- 04 Feb 2020, 10 commits
-
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_device or drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_device *T = ...; <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule2@ identifier func, T; @@ func(struct drm_device *T,...) { <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule3@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule4@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-20-pankaj.laxminarayan.bharadiya@intel.com
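For readers unfamiliar with the drm_WARN* helpers, here is a minimal before/after sketch of the kind of conversion this series performs; the function and field names are illustrative, not taken from the patches themselves.

    /* Illustrative only: the shape of the WARN_ON -> drm_WARN_ON conversion. */
    #include <drm/drm_print.h>
    #include "i915_drv.h"

    static void example_sanity_check(struct drm_i915_private *i915, u32 val)
    {
    	/* Before: a bare WARN_ON gives a backtrace with no device info. */
    	WARN_ON(val == 0);

    	/* After: drm_WARN_ON prefixes the warning with the drm_device,
    	 * so on multi-GPU systems we know which device tripped it. */
    	drm_WARN_ON(&i915->drm, val == 0);
    }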
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-19-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-18-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-17-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-16-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-15-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_device or drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_device *T = ...; <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule2@ identifier func, T; @@ func(struct drm_device *T,...) { <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule3@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule4@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-14-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific drm_WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_device struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_device *T = ...; <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule2@ identifier func, T; @@ func(struct drm_device *T,...) { <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-12-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-11-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_device or drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_device *T = ...; <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule2@ identifier func, T; @@ func(struct drm_device *T,...) { <... ( -WARN( +drm_WARN(T, ...) | -WARN_ON( +drm_WARN_ON(T, ...) | -WARN_ONCE( +drm_WARN_ONCE(T, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(T, ...) ) ...> } @rule3@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule4@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-10-pankaj.laxminarayan.bharadiya@intel.com
-
- 03 Feb 2020, 3 commits
-
-
By Chris Wilson
On seqno rollover, we need to allocate ourselves a new cacheline. This might incur grabbing a new page and pinning it into the GGTT, with some rather unfortunate lockdep implications. To avoid a mutex, and more specifically to avoid pinning in the GGTT from inside the kernel context being used to flush the GGTT in emergencies, we will likely need to lift the next-cacheline allocation to a pre-reservation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200203094152.4150550-3-chris@chris-wilson.co.uk
-
By Chris Wilson
Inside intel_timeline_get_seqno(), we currently track the retirement of the old cachelines by listening to the new request. This requires that the new request is ready to be used and so requires a minimum bit of initialisation prior to getting the new seqno. Fixes: b1e3177b ("drm/i915: Coordinate i915_active with its own mutex") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200203094152.4150550-2-chris@chris-wilson.co.uk
-
By Chris Wilson
Take a reference to the previous exclusive fence on the i915_active, as we wish to add an await to it in the caller (and so must prevent it from being freed until we have completed that task). Fixes: e3793468 ("drm/i915: Use the async worker to avoid reclaim tainting the ggtt->mutex") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200203094152.4150550-1-chris@chris-wilson.co.uk
-
- 02 Feb 2020, 4 commits
-
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-5-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below; checkpatch errors/warnings were fixed manually. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-3-pankaj.laxminarayan.bharadiya@intel.com
-
By Pankaj Bharadiya
The drm-specific WARN* calls include device information in the backtrace, so we know which device the warnings originate from. Convert all WARN* calls to the device-specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the coccinelle semantic patch below. @rule1@ identifier func, T; @@ func(...) { ... struct drm_i915_private *T = ...; <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } @rule2@ identifier func, T; @@ func(struct drm_i915_private *T,...) { <+... ( -WARN( +drm_WARN(&T->drm, ...) | -WARN_ON( +drm_WARN_ON(&T->drm, ...) | -WARN_ONCE( +drm_WARN_ONCE(&T->drm, ...) | -WARN_ON_ONCE( +drm_WARN_ON_ONCE(&T->drm, ...) ) ...+> } Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200128181603.27767-2-pankaj.laxminarayan.bharadiya@intel.com
-
By Daniele Ceraolo Spurio
Now that intel_engine_apply_workarounds is called on all gens, we can use the engine workaround lists for pre-gen8 workarounds as well, to be consistent in the way we handle and dump the WAs. v2: Ignore the sanity check of MI_MODE on Broadwater; for whatever reason it is not sticking. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200201194004.3622493-1-chris@chris-wilson.co.uk
-
- 01 Feb 2020, 3 commits
-
-
By Chris Wilson
A masked register does not need a read-modify-write (rmw) sequence to update, and it is best not to use one. Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200131235035.3522102-1-chris@chris-wilson.co.uk
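As background, i915's masked registers carry a write-enable mask in their upper 16 bits: only bits whose mask half is set are modified by a write, so a single write can update individual bits without a prior read. A rough sketch of the idea follows; the register offset and bit are made up for illustration, and the driver's _MASKED_BIT_ENABLE/_MASKED_BIT_DISABLE macros encode the same pattern.

    /* Illustrative sketch: why a masked register needs no read-modify-write.
     * The offset and bit below are hypothetical, not real hardware fields. */
    #include <linux/types.h>

    #define EXAMPLE_MASKED_REG	0x1234
    #define EXAMPLE_MODE_BIT	(1u << 3)

    /* (bit << 16) | bit sets the bit; (bit << 16) alone clears it.
     * Bits whose upper-half mask is 0 are left untouched by the hardware. */
    static inline u32 masked_bit_enable(u32 bit)
    {
    	return (bit << 16) | bit;
    }

    static inline u32 masked_bit_disable(u32 bit)
    {
    	return bit << 16;
    }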
-
By Daniele Ceraolo Spurio
The workarounds are a common "feature" across gens and submission mechanisms, and we already call the other WA-related functions from the common engine ones (<setup/cleanup>_common), so it makes sense to do the same with WA application. Medium-term, this will help us reduce the duplication once the GuC resume function is added, but short-term it will also allow us to use the workaround lists for pre-gen8 engine workarounds. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200131075716.2212299-2-chris@chris-wilson.co.uk
-
By Michal Wajdeczko
We already have a guc_is_running function, but it only reflects firmware status, while to fully use GuC we need to know whether we have already established communication with it. v2: also s/intel_guc_is_running/intel_guc_is_fw_running (Chris) Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200131153706.109528-1-michal.wajdeczko@intel.com
-
- 31 Jan 2020, 20 commits
-
-
By Chris Wilson
If the heartbeat fires in the middle of the preempt-hang test, it consumes our forced hang, disrupting the test. Reported-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200131130319.2998318-1-chris@chris-wilson.co.uk
-
By Chris Wilson
Since PIN_GLOBAL is no longer guaranteed to be synchronous, we must not forget to include a wait-for-vma prior to execution. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200131142610.3100998-1-chris@chris-wilson.co.uk
-
By Chris Wilson
In the rare cases where we are using the global GGTT for execution in the selftests, we have marked the vmas with PIN_USER knowing that they will be bound as PIN_GLOBAL as well. However, we need to catch the extra flag when deciding to use the async worker for such binds as well. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200131081543.2251298-1-chris@chris-wilson.co.uk
-
By Chris Wilson
To enable non-persistent contexts, we require a means of cancelling any inflight work from that context. This is first done "gracefully" by using preemption to kick the active context off the engine, and then forcefully by resetting the engine if it is still active. If we are unable to reset the engine to remove hostile userspace, we should not allow userspace to opt into using non-persistent contexts. If the per-engine reset fails, we still do a full GPU reset, but that is rare and usually indicative of much deeper issues; the damage is already done. However, the goal of the interface is to allow long-running compute jobs without causing collateral damage elsewhere, and if we are unable to support that we should make it known by not providing the interface at all, rather than falsely pretending we can. Fixes: a0e04715 ("drm/i915/gem: Make context persistence optional") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200130164553.1937718-1-chris@chris-wilson.co.uk
-
By Ville Syrjälä
Let's add a copy of the active_pipes bitmask into the cdclk_state. While this duplicates a bit of information we may already have elsewhere, I think it's worth it to decouple the cdclk stuff from whatever else wants to use that bitmask. Also, we want to get rid of all the old ad-hoc global state, which is what the current bitmask is, so this removes one obstacle. The one extra thing we have to remember is write-locking the cdclk state whenever the bitmask changes. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-19-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Let's convert cdclk_state to be a proper global state. That allows us to use the regular atomic old vs. new state accessors, hopefully making the code less confusing. We do have to deal with a few more error cases in case the cdclk state duplication fails, but so be it. v2: Fix new plane min_cdclk vs. old crtc min_cdclk check Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200121140353.25997-1-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Extract a small helper to compute the active pipes bitmask based on the old bitmask plus the crtcs in the atomic state. I want to decouple the cdclk state entirely from the current global state, so I want to track the active pipes also inside the (to be introduced) full cdclk state. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-17-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
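A rough sketch of what such a helper could look like, with the function name and details assumed for illustration rather than taken from the patch:

    /* Hypothetical sketch (assumes the usual i915 display headers): fold the
     * crtcs tracked in this atomic state into the previous active-pipes
     * bitmask and return the updated mask. */
    static u8 example_calc_active_pipes(struct intel_atomic_state *state,
    				    u8 active_pipes)
    {
    	const struct intel_crtc_state *crtc_state;
    	struct intel_crtc *crtc;
    	int i;

    	for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
    		if (crtc_state->hw.active)
    			active_pipes |= BIT(crtc->pipe);
    		else
    			active_pipes &= ~BIT(crtc->pipe);
    	}

    	return active_pipes;
    }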
-
By Ville Syrjälä
Now that we have the more formal global state, let's use it for memory bandwidth tracking. No real difference to the current private object usage, since we already tried to avoid taking the single serializing lock needlessly. But since we're going to roll the global state out to more things, it is probably a good idea to unify the approaches a bit. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-16-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Our current global state handling is pretty ad hoc. Let's try to make it better by imitating the standard drm core private object approach. The reason we don't want to use the private objects directly is locking; each private object has its own lock, so if we introduce any global private objects we get serialized by that single lock across all pipes. The global state approach instead uses a read/write-lock style of scheme where each individual crtc lock counts as a read lock, and grabbing all the crtc locks allows one write access. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-15-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
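A heavily simplified sketch of that locking rule, with the function name invented for illustration (the real implementation lives in the new global state code): any crtc lock already held by a commit acts as a read lock, while write access requires taking every crtc lock.

    /* Hypothetical illustration of the read/write convention: reading global
     * state piggybacks on whatever crtc lock the commit already holds, while
     * writing requires serializing against every pipe by taking all crtc locks. */
    static int example_lock_global_state_for_write(struct intel_atomic_state *state)
    {
    	struct drm_i915_private *i915 = to_i915(state->base.dev);
    	struct intel_crtc *crtc;

    	for_each_intel_crtc(&i915->drm, crtc) {
    		int ret;

    		ret = drm_modeset_lock(&crtc->base.mutex,
    				       state->base.acquire_ctx);
    		if (ret)
    			return ret;
    	}

    	return 0;
    }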
-
By Ville Syrjälä
Move intel_atomic_state_free() next to its counterpart. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-13-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Give the cdclk init/uninit functions a _hw suffix to make it clear they are about initializing the actual hardware. I'll be wanting to add an intel_cdclk_init() which purely initializes software structures. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-12-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
To make life less confusing, let's swap() the entire cdclk state rather than swapping some parts, copying other parts, and leaving the rest just as is. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-11-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Use the same structure to store the cdclk state in both intel_atomic_state and dev_priv. This is the first step towards proper old vs. new cdclk states. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-10-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
By Ville Syrjälä
Move all the old vs. new state shenanigans into intel_set_cdclk_{pre,post}_plane_update() so that the caller doesn't need to know any of it. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-9-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
By Ville Syrjälä
I want to have a higher-level cdclk state object, so let's rename the current lower-level thing to cdclk_config (because I lack imagination). Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-8-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
By Ville Syrjälä
intel_cdclk_needs_cd2x_update() is named rather confusingly. We don't have to do a cd2x update; rather, we are allowed to do one (as opposed to a full PLL reprogramming with its heavy-handed modeset). So let's rename the function to intel_cdclk_can_cd2x_update(). Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-7-ville.syrjala@linux.intel.com Reviewed-by: Imre Deak <imre.deak@intel.com>
-
By Ville Syrjälä
Move the min_cdclk[] and min_voltage_level[] arrays under the rest of the cdclk state. And while at it, provide a simple helper (intel_cdclk_clear_state()) to clear the state during the ww_mutex backoff dance. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-6-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
By Ville Syrjälä
Move the initial setup of state->{cdclk,min_cdclk[],min_voltage_level[]} into intel_modeset_calc_cdclk(), and move the counterparts into intel_cdclk_swap_state(). This encapsulates the cdclk state much better. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-5-ville.syrjala@linux.intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
-
By Ville Syrjälä
The dirty_pipes bitmask is now unused. Get rid of it. Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-4-ville.syrjala@linux.intel.com
-
By Ville Syrjälä
The linetime watermarks really have very little in common with the plane watermarks. It looks to be cleaner to simply track them in the crtc_state and program them from the normal modeset/fastset paths. The only dark cloud comes from the fact that the register is still supposedly single buffered, so in theory it might still need some form of two-stage programming. Note that even though HSW/BDW have two-stage programming, we never computed any special intermediate values for the linetime watermarks, and on SKL+ we don't even have the two-stage stuff plugged in since everything else is double buffered. So let's assume it's all fine and continue doing what we've been doing. Actually, on HSW/BDW the value should not even change without a full modeset, since it doesn't account for pfit downscaling; thus only fastboot might be affected. But on SKL+ the pfit scaling factor is taken into consideration, so the value may change during any fastset. As a bonus we'll plug this thing into the state checker/dump now. v2: Rebase due to bigjoiner prep v2: Only compute ips linetime for IPS-capable pipes. Bspec says the register value is ignored for other pipes, but in fact it can't even be written, so the state checker becomes unhappy if we don't compute it as zero. Cc: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com> Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200120174728.21095-3-ville.syrjala@linux.intel.com Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
-