1. 04 Dec 2018 (4 commits)
  2. 21 Nov 2018 (1 commit)
  3. 16 Nov 2018 (1 commit)
    • drm/i915/selftests: Workaround an issue with unused lockdep subclass · f911e723
      Committed by Chris Wilson
      lockdep insists that if we give a lock a subclass, it must be used.
      Failure to do so triggers a self-consistency check when reading
      lockdep_stats:
      
      [   49.902002] DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused)
      [   49.902009] WARNING: CPU: 3 PID: 383 at kernel/locking/lockdep_proc.c:249 lockdep_stats_show+0x984/0xa10
      [   49.902026] Modules linked in: nls_ascii nls_cp437 vfat fat crct10dif_pclmul crc32_pclmul crc32c_intel aesni_intel aes_x86_64 crypto_simd cryptd glue_helper intel_cstate intel_uncore intel_rapl_perf intel_gtt efivars prime_numbers ahci libahci i2c_i801 video button efivarfs [last unloaded: drm_kms_helper]
      [   49.902059] CPU: 3 PID: 383 Comm: cat Tainted: G     U            4.20.0-rc2+ #304
      [   49.902068] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
      [   49.902079] RIP: 0010:lockdep_stats_show+0x984/0xa10
      [   49.902086] Code: 00 85 c0 0f 84 aa f8 ff ff 8b 05 77 37 e2 00 85 c0 0f 85 9c f8 ff ff 48 c7 c6 e0 57 bc 81 48 c7 c7 28 30 bb 81 e8 6b 77 fa ff <0f> 0b e9 82 f8 ff ff 48 c7 44 24 50 00 00 00 00 45 31 e4 31 db 31
      [   49.902103] RSP: 0018:ffffc90000247d58 EFLAGS: 00010292
      [   49.902110] RAX: 0000000000000044 RBX: 00000000000002f0 RCX: 0000000000000000
      [   49.902118] RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffffffff810b3464
      [   49.902126] RBP: 0000000000000039 R08: 0000000000000002 R09: 0000000000000000
      [   49.902133] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000007ead
      [   49.902141] R13: 0000000000000001 R14: ffff88884c021000 R15: 0000000000000097
      [   49.902150] FS:  00007fb347e66540(0000) GS:ffff88885e600000(0000) knlGS:0000000000000000
      [   49.902159] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   49.902165] CR2: 00007fb347aeb000 CR3: 00000008544bd005 CR4: 00000000001606e0
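      As a minimal illustration of the lockdep rule at play (hypothetical
      locks, not the actual i915 workaround): a subclass only counts as
      "used" once the lock is really acquired with it, and an annotation
      that is never exercised is what leaves nr_unused_locks inconsistent.

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(parent);
      static DEFINE_SPINLOCK(child);

      static void exercise_subclass(void)
      {
              spin_lock(&parent);
              /* acquiring with the subclass marks that class as used */
              spin_lock_nested(&child, SINGLE_DEPTH_NESTING);
              spin_unlock(&child);
              spin_unlock(&parent);
      }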
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20181115203851.25739-1-chris@chris-wilson.co.uk
  4. 30 Oct 2018 (1 commit)
    • drm/i915: Prefer IS_GEN<n> check with bitmask. · 9e783375
      Committed by Rodrigo Vivi
      Whenever possible we should stick with IS_GEN<n> checks.
      
      The bitmask check was introduced in commit ae7617f0 ("drm/i915:
      Allow optimized platform checks") for efficiency.
      
      Let's stick with it whenever possible.
      
      This patch was generated with coccinelle:
      
      spatch -sp_file is_gen.cocci *{c,h} --in-place
      
      is_gen.cocci:
      @gen2@ expression e; @@
      -INTEL_GEN(e) == 2
      +IS_GEN2(e)
      @gen3@ expression e; @@
      -INTEL_GEN(e) == 3
      +IS_GEN3(e)
      @gen4@ expression e; @@
      -INTEL_GEN(e) == 4
      +IS_GEN4(e)
      @gen5@ expression e; @@
      -INTEL_GEN(e) == 5
      +IS_GEN5(e)
      @gen6@ expression e; @@
      -INTEL_GEN(e) == 6
      +IS_GEN6(e)
      @gen7@ expression e; @@
      -INTEL_GEN(e) == 7
      +IS_GEN7(e)
      @gen8@ expression e; @@
      -INTEL_GEN(e) == 8
      +IS_GEN8(e)
      @gen9@ expression e; @@
      -INTEL_GEN(e) == 9
      +IS_GEN9(e)
      @gen10@ expression e; @@
      -INTEL_GEN(e) == 10
      +IS_GEN10(e)
      @gen11@ expression e; @@
      -INTEL_GEN(e) == 11
      +IS_GEN11(e)
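
      For illustration, a hypothetical call site (the guarded helper is made
      up) is rewritten by the script as follows:

      -	if (INTEL_GEN(dev_priv) == 9)
      +	if (IS_GEN9(dev_priv))
      		apply_gen9_quirk(dev_priv);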
      
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20181026195143.20353-1-rodrigo.vivi@intel.com
  5. 18 Oct 2018 (1 commit)
    • drm/i915: GEM_WARN_ON considered harmful · bbb8a9d7
      Committed by Tvrtko Ursulin
      GEM_WARN_ON currently has dangerous semantics where it is completely
      compiled out on !GEM_DEBUG builds. This can catch out users who expect
      it to behave like a WARN_ON, just without the warning in non-debug
      builds.
      
      Another gotcha with it is that it cannot be used as a statement, which
      is again different from a standard kernel WARN_ON.
      
      This patch fixes both problems by making it behave as one would expect.
      
      It can now be used both as an expression and as a statement, and also the
      condition evaluates properly in all builds - code under the conditional
      will therefore not unexpectedly disappear.
      
      To satisfy call sites which really want the code under the conditional to
      completely disappear, we add GEM_DEBUG_WARN_ON and convert some of the
      callers to it. This one can also be used as both an expression and a statement.
      
      From the above it follows that GEM_DEBUG_WARN_ON should be used in situations
      where we are certain the condition will be hit during development, but at
      a place in code where error can be handled to the benefit of not crashing
      the machine.
      
      GEM_WARN_ON on the other hand should be used where the condition may
      happen in production and we just want to distinguish the level of
      debugging output emitted between the production and debug builds.
      
      v2:
       * Dropped BUG_ON hunk.
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Tomasz Lis <tomasz.lis@intel.com>
      Reviewed-by: Tomasz Lis <tomasz.lis@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20181012063142.16080-1-tvrtko.ursulin@linux.intel.com
  6. 17 Oct 2018 (1 commit)
  7. 12 Oct 2018 (1 commit)
  8. 01 Oct 2018 (1 commit)
  9. 26 Sep 2018 (1 commit)
  10. 14 Sep 2018 (1 commit)
  11. 04 Sep 2018 (2 commits)
  12. 21 Aug 2018 (1 commit)
  13. 15 Aug 2018 (1 commit)
  14. 07 Aug 2018 (1 commit)
  15. 27 Jul 2018 (1 commit)
  16. 24 Jul 2018 (1 commit)
  17. 14 Jul 2018 (1 commit)
  18. 11 Jul 2018 (1 commit)
  19. 07 Jul 2018 (1 commit)
  20. 06 Jul 2018 (1 commit)
  21. 05 Jul 2018 (1 commit)
  22. 29 Jun 2018 (3 commits)
    • drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd) · 9512f985
      Committed by Chris Wilson
      Back in commit 27af5eea ("drm/i915: Move execlists irq handler to a
      bottom half"), we came to the conclusion that running our CSB processing
      and ELSP submission from inside the irq handler was a bad idea. A really
      bad idea as we could impose nearly 1s latency on other users of the
      system, on average! Deferring our work to a tasklet allowed us to do the
      processing with irqs enabled, reducing the impact to an average of about
      50us.
      
      We have since eradicated the use of forcewaked mmio from inside the CSB
      processing and ELSP submission, bringing the impact down to around 5us
      (on Kabylake); an order of magnitude better than our measurements 2
      years ago on Broadwell and only about 2x worse on average than the
      gem_syslatency on an unladen system.
      
      In this iteration of the tasklet-vs-direct submission debate, we seek a
      compromise whereby we submit new requests immediately to the HW but
      defer processing the CS interrupt onto a tasklet. We gain the advantage
      of low-latency and ksoftirqd avoidance when waking up the HW, while
      avoiding the system-wide starvation of our CS irq-storms.
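
      A rough sketch of that compromise (hypothetical structure and helper
      names; the helper stands in for the real CSB processing and ELSP
      submission): submit inline when we can claim the tasklet, otherwise
      leave the work to the tasklet.

      #include <linux/interrupt.h>

      struct engine {                         /* hypothetical stand-in */
              struct tasklet_struct tasklet;
      };

      static void process_csb_and_dequeue(struct engine *engine); /* hypothetical */

      static void submit_direct(struct engine *engine)
      {
              unsigned long flags;

              local_irq_save(flags);
              if (tasklet_trylock(&engine->tasklet)) {
                      /* nobody else owns the tasklet: process inline */
                      process_csb_and_dequeue(engine);
                      tasklet_unlock(&engine->tasklet);
              } else {
                      /* tasklet already running: let it pick up the work */
                      tasklet_hi_schedule(&engine->tasklet);
              }
              local_irq_restore(flags);
      }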
      
      Comparing the impact on the maximum latency observed (that is the time
      stolen from an RT process) over a 120s interval, repeated several times
      (using gem_syslatency, similar to RT's cyclictest) while the system is
      fully laden with i915 nops, we see that direct submission can actually
      improve the worst case.
      
      Maximum latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 2)
        x Always using tasklets (a couple of >1000us outliers removed)
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |          +                                                             |
      |          +                                                             |
      |          +                                                             |
      |          +       +                                                     |
      |          + +     +                                                     |
      |       +  + +     +  x     x     x                                      |
      |      +++ + +     +  x  x  x  x  x  x                                   |
      |      +++ + ++  + +  *x x  x  x  x  x                                   |
      |      +++ + ++  + *  *x x  *  x  x  x                                   |
      |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
      |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
      |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
      |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
      |             |__________MA___________|                                  |
      |      |______M__A________|                                              |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 118            91           186           124     125.28814     16.279137
      + 120            92           187           109     112.00833     13.458617
      Difference at 95.0% confidence
      	-13.2798 +/- 3.79219
      	-10.5994% +/- 3.02677%
      	(Student's t, pooled s = 14.9237)
      
      However the mean latency is adversely affected:
      
      Mean latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 1)
        x Always using tasklets
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                      + +++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ +++        |
      |           xxxxxxx                                   + ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |          xxxxxxxxxx                                +++++++++++++++     |
      |         xxxxxxxxxxx    x                           +++++++++++++++     |
      |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
      |           |__A__|                                                      |
      |                                                      |____A___|        |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 120         3.506         3.727         3.631     3.6321417    0.02773109
      + 120         3.834         4.149         4.039     4.0375167   0.041221676
      Difference at 95.0% confidence
      	0.405375 +/- 0.00888913
      	11.1608% +/- 0.244735%
      	(Student's t, pooled s = 0.03513)
      
      However, since the mean latency corresponds to the amount of irqsoff
      processing we have to do for a CS interrupt, we only need to speed that
      up to benefit not just system latency but our own throughput.
      
      v2: Remember to defer submissions when under reset.
      v4: Only use direct submission for new requests
      v5: Be aware that with mixing direct tasklet evaluation and deferred
      tasklets, we may end up idling before running the deferred tasklet.
      v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
      annotation to reset_in_progress().
      v7: Take the full timeline.lock when enabling perf_pmu stats as the
      tasklet is no longer a valid guard. A consequence is that the stats are
      now only valid for engines also using the timeline.lock to process
      state.
      
      Testcase: igt/gem_exec_latency/*rthog*
      References: 27af5eea ("drm/i915: Move execlists irq handler to a bottom half")
      Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
    • drm/i915/execlists: Trust the CSB · fd8526e5
      Committed by Chris Wilson
      Now that we use the CSB stored in the CPU friendly HWSP, we do not need
      to track interrupts for when the mmio CSB registers are valid and can
      just check where we read up to last from the cached HWSP. This means we
      can forgo the atomic bit tracking from interrupt, and in the next patch
      it means we can check the CSB at any time.
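
      A minimal sketch of what "check where we read up to last" amounts to
      (hypothetical helper; each event in the HWSP-backed buffer is two
      dwords):

      #include <linux/types.h>

      #define GEN8_CSB_ENTRIES 6                      /* as in the driver */

      static void handle_csb_event(u32 status, u32 context_id); /* hypothetical */

      static u8 process_cached_csb(const u32 *hwsp_csb, u8 head, u8 tail)
      {
              while (head != tail) {
                      if (++head == GEN8_CSB_ENTRIES)
                              head = 0;
                      /* status dword + context ID dword, plain memory reads */
                      handle_csb_event(hwsp_csb[2 * head], hwsp_csb[2 * head + 1]);
              }

              return head;    /* remember for the next pass */
      }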
      
      v2: Change the splitting inside reset_prepare; we only want to drop the
      interrupt testing in this patch, the next patch requires the locking
      change.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
    • drm/i915/execlists: Unify CSB access pointers · bc4237ec
      Committed by Chris Wilson
      Following the removal of the last workarounds, the only CSB mmio access
      is for the old vGPU interface. The mmio registers presented by vGPU do
      not require forcewake and can be treated as ordinary volatile memory,
      i.e. they behave just like the HWSP access, only at a different location.
      We can reduce the CSB access to a set of read/write/buffer pointers and
      treat the various paths identically and not worry about forcewake.
      (Forcewake is a nightmare for worst-case latency, and we want to process
      this all with irqsoff -- no latency allowed!)
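
      A sketch of the unified view (hypothetical field names): whether the
      buffer lives in the HWSP or behind the vGPU mmio window, the consumer
      only sees a base pointer plus head/tail indices and reads it as
      ordinary memory.

      #include <linux/types.h>

      struct csb_pointers {
              const u32 *status;      /* CSB events, two dwords each */
              u8 head;                /* last event we consumed */
              u8 tail;                /* last event written by the HW */
      };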
      
      v2: Comments, comments, comments. Well, 2 bonus comments.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
  23. 21 Jun 2018 (1 commit)
    • drm/i915: Disable bh around call to tasklet · 26eb4cd6
      Committed by Chris Wilson
      The guc submission backend expects to only be run from (at least)
      softirq context, but during our intel_engine_is_idle() check we would
      call into the tasklet to make sure it was flushed. As this could occur
      from process context, occasionally we would be caught out using a
      wait_for_atomic() not from an atomic context:
      
      [   59.939091] WARN_ON_ONCE((1) && !(preempt_count() != 0))
      [   59.939142] WARNING: CPU: 1 PID: 2901 at drivers/gpu/drm/i915/intel_guc_submission.c:615 guc_submission_tasklet+0x784/0xa90 [i915]
      [   59.939143] Modules linked in: vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic i915 x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul snd_hda_intel crc32_pclmul snd_hda_codec ghash_clmulni_intel snd_hwdep snd_hda_core e1000e snd_pcm mei_me mei prime_numbers
      [   59.939164] CPU: 1 PID: 2901 Comm: gem_exec_schedu Tainted: G     U  W         4.18.0-rc1-g93475d62c730-drmtip_67+ #1
      [   59.939165] Hardware name: System manufacturer System Product Name/Z170M-PLUS, BIOS 3610 03/29/2018
      [   59.939188] RIP: 0010:guc_submission_tasklet+0x784/0xa90 [i915]
      [   59.939189] Code: fc ff ff 80 3d 2f 87 11 00 00 0f 85 80 fb ff ff 48 c7 c6 f8 49 40 c0 48 c7 c7 80 41 3e c0 c6 05 14 87 11 00 01 e8 2c ea d6 d3 <0f> 0b e9 5f fb ff ff 8b 46 38 89 cf 31 c7 83 e7 c0 75 08 39 c1 0f
      [   59.939253] RSP: 0018:ffffaafe08a03c10 EFLAGS: 00010286
      [   59.939255] RAX: 0000000000000000 RBX: ffff8f9112c246f0 RCX: 0000000000000001
      [   59.939256] RDX: 0000000080000001 RSI: ffffffff95086d8e RDI: 00000000ffffffff
      [   59.939257] RBP: ffff8f9112c24680 R08: 000000009517be77 R09: 0000000000000000
      [   59.939258] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8f9112c24700
      [   59.939259] R13: ffff8f9112c24700 R14: 0000000000000000 R15: ffff8f9112c242a8
      [   59.939260] FS:  00007fc2cc7e5980(0000) GS:ffff8f9136c40000(0000) knlGS:0000000000000000
      [   59.939261] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   59.939262] CR2: 00007fc2cc815040 CR3: 000000021f10e003 CR4: 00000000003606e0
      [   59.939263] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [   59.939264] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [   59.939265] Call Trace:
      [   59.939288]  ? intel_engine_is_idle+0x64/0x160 [i915]
      [   59.939323]  ? intel_engine_dump+0x638/0x890 [i915]
      [   59.939327]  ? seq_printf+0x49/0x70
      [   59.939353]  ? i915_engine_info+0xc8/0x100 [i915]
      [   59.939356]  ? drm_get_color_range_name+0x20/0x20
      [   59.939361]  ? seq_read+0xf1/0x470
      [   59.939365]  ? trace_hardirqs_on_caller+0xe0/0x1b0
      [   59.939370]  ? full_proxy_read+0x51/0x80
      [   59.939389]  ? __vfs_read+0x31/0x170
      [   59.939395]  ? do_sys_open+0x13b/0x240
      [   59.939398]  ? rcu_read_lock_sched_held+0x6f/0x80
      [   59.939401]  ? vfs_read+0x9e/0x140
      [   59.939404]  ? ksys_read+0x50/0xc0
      [   59.939409]  ? do_syscall_64+0x55/0x190
      [   59.939412]  ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
      [   59.939420] irq event stamp: 552834
      [   59.939422] hardirqs last  enabled at (552833): [<ffffffff940fc74c>] console_unlock+0x3fc/0x600
      [   59.939425] hardirqs last disabled at (552834): [<ffffffff94a0111c>] error_entry+0x7c/0x100
      [   59.939451] softirqs last  enabled at (552614): [<ffffffffc02e0f53>] i915_request_add+0x2e3/0x7b0 [i915]
      [   59.939470] softirqs last disabled at (552604): [<ffffffffc02e0ecb>] i915_request_add+0x25b/0x7b0 [i915]
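
      A minimal sketch of the fix implied by the title (hypothetical helper;
      the point is simply that flushing the tasklet from process context now
      happens with bottom halves disabled, so the softirq-context expectation
      of the guc backend holds):

      #include <linux/interrupt.h>

      static void flush_submission_tasklet(struct tasklet_struct *t)
      {
              local_bh_disable();     /* backend expects (at least) softirq context */
              if (tasklet_trylock(t)) {
                      t->func(t->data);
                      tasklet_unlock(t);
              }
              local_bh_enable();
      }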
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106977
      Fixes: dd0cf235 ("drm/i915: Speed up idle detection by kicking the tasklets")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Reviewed-by: Michel Thierry <michel.thierry@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180620135929.23956-1-chris@chris-wilson.co.uk
  24. 18 Jun 2018 (1 commit)
    • drm/i915: Fix fallout of fake reset along resume · 4fdd5b4e
      Committed by Chris Wilson
      commit b2209e62 ("drm/i915/execlists: Reset the CSB head tracking on
      reset/sanitization") and commit 1288786b ("drm/i915: Move GEM sanitize
      from resume_early to resume") show the conflicting requirements on the
      code. We must reset the GPU before trashing live state on a fast resume
      (hibernation debug, or error paths), but we must only reset our state
      tracking iff the GPU is reset (or power cycled). This is tricky if we
      are disabling GPU reset to simulate broken hardware; we reset our state
      tracking but the GPU is left intact and recovers from its stale state.
      
      v2: Again without the assertion for forcewake, no longer required since
      commit b3ee09a4 ("drm/i915/ringbuffer: Fix context restore upon reset")
      as the contexts are reset from the CS ensuring everything is powered up.
      
      Fixes: b2209e62 ("drm/i915/execlists: Reset the CSB head tracking on reset/sanitization")
      Fixes: 1288786b ("drm/i915: Move GEM sanitize from resume_early to resume")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180616202534.18767-1-chris@chris-wilson.co.uk
  25. 14 Jun 2018 (3 commits)
  26. 12 Jun 2018 (1 commit)
  27. 11 Jun 2018 (1 commit)
    • drm/i915/ringbuffer: Fix context restore upon reset · b3ee09a4
      Committed by Chris Wilson
      The discovery while trying to enable full-ppgtt was that we were
      completely failing to load both the mm and the context following the
      reset. Although we were performing mmio to set the PP_DIR (per-process
      GTT) and CCID (context), these were taking no effect (the assumption was
      that this would trigger reload of the context and restore the page
      tables). It was not until we performed the LRI + MI_SET_CONTEXT in a
      following context switch would anything occur.
      
      Since we are then required to reset the context image and PP_DIR using
      CS commands, we place those commands into every batch. The hardware
      should recognise the no-ops and eliminate the expensive context loads,
      but we still have to pay the cost of using cross-powerwell register
      writes. In practice, this has no effect on actual context switch times,
      and only adds a few hundred nanoseconds to no-op switches. We can improve
      the latter by eliminating the w/a around known no-op switches, but there
      is an ulterior motive to keeping them.
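
      Roughly, the per-request emission has the following shape (a simplified
      sketch relying on the usual i915 internal headers; the real code also
      handles per-gen MI_SET_CONTEXT flags and the Sandybridge LRI quirk
      mentioned in v2 below):

      #include "i915_drv.h"           /* i915 internals assumed */

      static int emit_mm_and_context_reload(struct i915_request *rq, u32 pd_offset)
      {
              u32 *cs;

              cs = intel_ring_begin(rq, 6);
              if (IS_ERR(cs))
                      return PTR_ERR(cs);

              /* reload the per-process GTT directory via LRI */
              *cs++ = MI_LOAD_REGISTER_IMM(1);
              *cs++ = i915_mmio_reg_offset(RING_PP_DIR_BASE(rq->engine));
              *cs++ = pd_offset;

              /* reload the context image; HW skips true no-op switches */
              *cs++ = MI_SET_CONTEXT;
              *cs++ = i915_ggtt_offset(rq->hw_context->state) | MI_MM_SPACE_GTT;
              *cs++ = MI_NOOP;

              intel_ring_advance(rq, cs);
              return 0;
      }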
      
      Always emitting the context switch at the beginning of the request (and
      relying on HW to skip unneeded switches) does have one key advantage.
      Should we implement request reordering on Haswell, we will not know in
      advance what the previous executing context was on the GPU and so we
      would not be able to elide the MI_SET_CONTEXT commands ourselves and
      always have to emit them. Having our hand forced now actually prepares
      us for later.
      
      Now since that context and mm follow the request, we no longer (and not
      for a long time since requests took over!) require a trace point to tell
      when we write the switch into the ring, since we always do. (This is
      even more important when you remember that simply writing into the ring
      bears no relation to the current mm.)
      
      v2: Sandybridge has to agree to use LRI as well.
      
      Testcase: igt/drv_selftests/live_hangcheck
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Matthew Auld <matthew.william.auld@gmail.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180611110845.31890-1-chris@chris-wilson.co.uk
  28. 06 Jun 2018 (1 commit)
  29. 31 May 2018 (1 commit)
  30. 24 May 2018 (2 commits)
    • drm/i915/icl: Enable WaProgramMgsrForCorrectSliceSpecificMmioReads · d78fa508
      Committed by Yunwei Zhang
      WaProgramMgsrForCorrectSliceSpecificMmioReads applies to Icelake as
      well.
      
      References: HSD#1405586840, BSID#0575
      
      v2:
       - GEN11 mask is different from its predecessors. (Oscar)
       - Better separate GEN10 and GEN11. (Oscar)
      
      Cc: Oscar Mateo <oscar.mateo@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Signed-off-by: Yunwei Zhang <yunwei.zhang@intel.com>
      Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
      Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/1526683232-24753-1-git-send-email-yunwei.zhang@intel.com
    • drm/i915/cnl: Implement WaProgramMgsrForCorrectSliceSpecificMmioReads · 1e40d4ae
      Committed by Yunwei Zhang
      WaProgramMgsrForCorrectSliceSpecificMmioReads dictates that before any
      MMIO read into slice/subslice specific registers, the MCR packet control
      register (0xFDC) needs to be programmed to point to any enabled
      slice/subslice pair. Otherwise, an incorrect value will be returned.
      
      However, that means each subsequent MMIO read will be forwarded to a
      specific slice/subslice combination as the read is unicast. This is OK
      since slice/subslice specific register values are consistent across
      slice/subslice in almost all cases. There are rare occasions, such as
      INSTDONE, where the value depends on the slice/subslice combination; in
      such cases we need to program 0xFDC and restore it afterwards. This is
      already covered by read_subslice_reg.
      
      Also, 0xFDC will lose its information after TDR/engine reset/power state
      change.
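
      A sketch of the read flow described above, modeled on what
      read_subslice_reg does (helper name and details are illustrative):
      steer 0xFDC to an enabled slice/subslice, perform the unicast read,
      then restore the previous steering.

      #include "i915_drv.h"           /* i915 internals assumed */

      static u32 read_ss_steered_reg(struct drm_i915_private *dev_priv,
                                     i915_reg_t reg, u32 mcr_ss_select)
      {
              u32 old_mcr, val;

              old_mcr = I915_READ_FW(GEN8_MCR_SELECTOR);
              /* point unicast reads at an enabled slice/subslice pair */
              I915_WRITE_FW(GEN8_MCR_SELECTOR, mcr_ss_select);
              val = I915_READ_FW(reg);
              /* restore steering so later reads are not misdirected */
              I915_WRITE_FW(GEN8_MCR_SELECTOR, old_mcr);

              return val;
      }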
      
      References: HSD#1405586840, BSID#0575
      
      v2:
       - use fls() instead of find_last_bit() (Chris)
       - added INTEL_SSEU to extract sseu from device info. (Chris)
      v3:
       - rebase on latest tip
      v5:
       - Added references (Mika)
       - Change the order of the passed arguments, etc. (Ursulin)
      v7:
       - Moved WA explanation comments (Oscar)
       - Rebased.
      v8:
       - Renamed sanitize_mcr to calculate_s_ss_select. (Oscar)
       - calculate s/ss selector instead of whole mcr. (Oscar)
      v9:
       - Updated function name (Oscar)
       - Remove redundant variables (Oscar)
      v10:
       - Separate pre-GEN10 and GEN11 mask. (Oscar)
      
      Cc: Oscar Mateo <oscar.mateo@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Signed-off-by: Yunwei Zhang <yunwei.zhang@intel.com>
      Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
      Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/1526683197-24656-1-git-send-email-yunwei.zhang@intel.com
  31. 19 May 2018 (1 commit)