1. 06 Jul, 2018 12 commits
  2. 05 Jul, 2018 9 commits
  3. 04 Jul, 2018 2 commits
  4. 03 Jul, 2018 5 commits
  5. 02 Jul, 2018 1 commit
  6. 30 Jun, 2018 3 commits
  7. 29 Jun, 2018 8 commits
    • drm/i915: Remove delayed FBC activation. · 45720959
      Maarten Lankhorst committed
      The only time we should start FBC is when we have waited a vblank
      after the atomic update. We've already forced a vblank wait by doing
      wait_for_flip_done before intel_post_plane_update(), so we don't need
      to wait a second time before enabling.
      
      Removing the worker simplifies the code and removes possible race
      conditions, such as the one reported in bug 103167 (a sketch of the
      resulting flow follows this entry).
      
      Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-2-maarten.lankhorst@linux.intel.com
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      45720959
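      Editor's sketch: a minimal, compilable user-space C model of the flow
      this commit arrives at. All names (wait_for_flip_done_model, etc.) are
      hypothetical stand-ins, not the i915 driver's functions; the point is
      only that the flip-done wait already supplies the post-update vblank,
      so activation can happen inline instead of via a delayed worker:

        /* build: cc fbc_direct.c && ./a.out */
        struct crtc_state_model { int fbc_allowed; };

        static void wait_for_flip_done_model(void) { /* one-vblank wait (stub) */ }
        static void fbc_activate_model(void)       { /* program FBC (stub) */ }

        static void post_plane_update_model(const struct crtc_state_model *state)
        {
                /* Before: schedule a worker delayed by ~one vblank.
                 * After: activate inline; the flip-done wait above already
                 * guaranteed a vblank passed since the atomic update. */
                if (state->fbc_allowed)
                        fbc_activate_model();
        }

        int main(void)
        {
                struct crtc_state_model s = { .fbc_allowed = 1 };
                wait_for_flip_done_model();   /* done by the commit path */
                post_plane_update_model(&s);
                return 0;
        }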
    • drm/i915: Block enabling FBC until flips have been completed · c9855a56
      Maarten Lankhorst committed
      There is a small race window in which FBC can be enabled after
      pre_plane_update() is called, but before the page flip has been
      queued or completed (a sketch of the guard follows this entry).
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103167
      Link: https://patchwork.freedesktop.org/patch/msgid/20180625163758.10871-1-maarten.lankhorst@linux.intel.com
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      c9855a56
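      Editor's sketch: a hypothetical model of the guard this commit adds (a
      compilable user-space C toy; none of these names are the driver's own).
      Enabling is simply refused while a flip queued by pre_plane_update()
      has not yet completed, which closes the race window described above:

        #include <stdbool.h>
        #include <stdio.h>

        static bool flip_pending;  /* set before the flip, cleared on flip done */

        static void pre_plane_update_model(void) { flip_pending = true; }
        static void flip_done_model(void)        { flip_pending = false; }

        static bool fbc_try_enable_model(void)
        {
                if (flip_pending)
                        return false;  /* refuse: flip queued but not completed */
                /* ...enable FBC here... */
                return true;
        }

        int main(void)
        {
                pre_plane_update_model();
                printf("before flip done: %d\n", fbc_try_enable_model()); /* 0 */
                flip_done_model();
                printf("after flip done:  %d\n", fbc_try_enable_model()); /* 1 */
                return 0;
        }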
    • drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd) · 9512f985
      Chris Wilson committed
      Back in commit 27af5eea ("drm/i915: Move execlists irq handler to a
      bottom half"), we came to the conclusion that running our CSB processing
      and ELSP submission from inside the irq handler was a bad idea. A really
      bad idea as we could impose nearly 1s latency on other users of the
      system, on average! Deferring our work to a tasklet allowed us to do the
      processing with irqs enabled, reducing the impact to an average of about
      50us.
      
      We have since eradicated the use of forcewaked mmio from inside the CSB
      processing and ELSP submission, bringing the impact down to around 5us
      (on Kabylake); an order of magnitude better than our measurements 2
      years ago on Broadwell and only about 2x worse on average than the
      gem_syslatency on an unladen system.
      
      In this iteration of the tasklet-vs-direct submission debate, we seek
      a compromise whereby we submit new requests immediately to the HW but
      defer processing the CS interrupt onto a tasklet (a sketch of this
      split follows this entry). We gain the advantage of low latency and
      ksoftirqd avoidance when waking up the HW, while avoiding the
      system-wide starvation caused by our CS irq-storms.
      
      Comparing the impact on the maximum latency observed (that is the time
      stolen from an RT process) over a 120s interval, repeated several times
      (using gem_syslatency, similar to RT's cyclictest) while the system is
      fully laden with i915 nops, we see that direct submission can actually
      improve the worst case.
      
      Maximum latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 2)
        x Always using tasklets (a couple of >1000us outliers removed)
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |          +                                                             |
      |          +                                                             |
      |          +                                                             |
      |          +       +                                                     |
      |          + +     +                                                     |
      |       +  + +     +  x     x     x                                      |
      |      +++ + +     +  x  x  x  x  x  x                                   |
      |      +++ + ++  + +  *x x  x  x  x  x                                   |
      |      +++ + ++  + *  *x x  *  x  x  x                                   |
      |    + +++ + ++  * * +*xxx  *  x  x  xx                                  |
      |    * +++ + ++++* *x+**xx+ *  x  x xxxx x                               |
      |   **x++++*++**+*x*x****x+ * +x xx xxxx x          x                    |
      |x* ******+***************++*+***xxxxxx* xx*x     xxx +                x+|
      |             |__________MA___________|                                  |
      |      |______M__A________|                                              |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 118            91           186           124     125.28814     16.279137
      + 120            92           187           109     112.00833     13.458617
      Difference at 95.0% confidence
      	-13.2798 +/- 3.79219
      	-10.5994% +/- 3.02677%
      	(Student's t, pooled s = 14.9237)
      
      However the mean latency is adversely affected:
      
      Mean latency in microseconds of a third party RT thread
      (gem_syslatency -t 120 -f 1)
        x Always using tasklets
        + Only using tasklets from CS irq, direct submission of requests
      +------------------------------------------------------------------------+
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                        +   ++         |
      |           xxxxxx                                      + +++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ ++         |
      |           xxxxxxx                                     +++++ +++        |
      |           xxxxxxx                                   + ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |           xxxxxxxx                                 ++ ++++++++++       |
      |          xxxxxxxxxx                                +++++++++++++++     |
      |         xxxxxxxxxxx    x                           +++++++++++++++     |
      |x       xxxxxxxxxxxxx   x           +            + ++++++++++++++++++  +|
      |           |__A__|                                                      |
      |                                                      |____A___|        |
      +------------------------------------------------------------------------+
          N           Min           Max        Median           Avg        Stddev
      x 120         3.506         3.727         3.631     3.6321417    0.02773109
      + 120         3.834         4.149         4.039     4.0375167   0.041221676
      Difference at 95.0% confidence
      	0.405375 +/- 0.00888913
      	11.1608% +/- 0.244735%
      	(Student's t, pooled s = 0.03513)
      
      However, since the mean latency corresponds to the amount of irqsoff
      processing we have to do for a CS interrupt, we only need to speed that
      up to benefit not just system latency but our own throughput.
      
      v2: Remember to defer submissions when under reset.
      v4: Only use direct submission for new requests
      v5: Be aware that with mixing direct tasklet evaluation and deferred
      tasklets, we may end up idling before running the deferred tasklet.
      v6: Remove the redundant likely() from tasklet_is_enabled(), restrict the
      annotation to reset_in_progress().
      v7: Take the full timeline.lock when enabling perf_pmu stats as the
      tasklet is no longer a valid guard. A consequence is that the stats are
      now only valid for engines also using the timeline.lock to process
      state.
      
      Testcase: igt/gem_exec_latency/*rthog*
      References: 27af5eea ("drm/i915: Move execlists irq handler to a bottom half")
      Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-9-chris@chris-wilson.co.uk
      9512f985
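      Editor's sketch: the compromise in this commit, reduced to a compilable
      user-space C toy with hypothetical names. New requests are handled
      directly (unless a reset has disabled the tasklet, per v2), while the
      CS interrupt only ever kicks the tasklet:

        #include <stdbool.h>

        struct engine_model {
                bool tasklet_disabled;  /* e.g. while a reset is in progress */
                int  tasklet_kicks;
        };

        static void process_csb_and_submit(struct engine_model *e) { (void)e; }
        static void tasklet_kick(struct engine_model *e) { e->tasklet_kicks++; }

        /* New-request path: submit directly when allowed, else defer. */
        static void submit_request_model(struct engine_model *e)
        {
                if (!e->tasklet_disabled)
                        process_csb_and_submit(e);  /* irqs off, kept very short */
                else
                        tasklet_kick(e);            /* deferred while resetting */
        }

        /* CS interrupt path: never processes the CSB in the irq handler. */
        static void cs_irq_model(struct engine_model *e)
        {
                tasklet_kick(e);
        }

        int main(void)
        {
                struct engine_model e = { 0 };
                submit_request_model(&e); /* direct: low latency, no ksoftirqd */
                cs_irq_model(&e);         /* CSB processed from the tasklet */
                return e.tasklet_kicks;   /* 1 */
        }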
    • drm/i915/execlists: Trust the CSB · fd8526e5
      Chris Wilson committed
      Now that we use the CSB stored in the CPU-friendly HWSP, we do not need
      to track interrupts to know when the mmio CSB registers are valid; we
      can simply check how far we read last time against the cached HWSP (a
      sketch follows this entry). This means we can forgo the atomic bit
      tracking from the interrupt, and in the next patch it means we can
      check the CSB at any time.
      
      v2: Change the splitting inside reset_prepare; we only want to lose
      testing the interrupt in this patch, as the next patch requires the
      change in locking.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-8-chris@chris-wilson.co.uk
      fd8526e5
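      Editor's sketch: a compilable user-space C model (hypothetical names
      throughout) of what "trusting the CSB" buys: progress is judged purely
      by comparing a cached read index against the write index the HW mirrors
      into the HWSP, so the routine can be called at any time with no atomic
      "interrupt seen" bit:

        #include <stdint.h>

        #define CSB_ENTRIES 6  /* six-slot ring, as on the HW this models */

        struct csb_model {
                uint32_t status[CSB_ENTRIES]; /* HWSP-backed event buffer */
                uint32_t write;               /* index the HW last wrote  */
                uint32_t head;                /* our cached read index    */
        };

        static void handle_event_model(uint32_t status) { (void)status; }

        static void process_csb_model(struct csb_model *csb)
        {
                while (csb->head != csb->write) {
                        csb->head = (csb->head + 1) % CSB_ENTRIES;
                        handle_event_model(csb->status[csb->head]);
                }
        }

        int main(void)
        {
                struct csb_model csb = { .write = 2, .head = CSB_ENTRIES - 1 };
                process_csb_model(&csb);  /* consumes slots 0, 1 and 2 */
                return csb.head;          /* 2 */
        }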
    • drm/i915/execlists: Stop storing the CSB read pointer in the mmio register · 3800cd19
      Chris Wilson committed
      As we now never read back our current head position from the CSB
      pointers register, and the HW itself doesn't use it to prevent
      overwriting unread CSB entries, we do not need to keep updating the
      register. As it turns out, this register is not listed as being
      shadowed and so requires forcewake -- but we haven't been taking
      forcewake around it, so the writes have probably been dropped
      regularly. Fortuitously, we only read the value after a reset, where
      it did not matter and zero was the right answer (well, close enough).
      A sketch of the resulting read-pointer handling follows this entry.
      
      Mika pointed out that this was how we used to do it (accidentally!)
      before he fixed it in commit cc53699b ("drm/i915: Use masked write
      for Context Status Buffer Pointer").
      
      References: cc53699b ("drm/i915: Use masked write for Context Status Buffer Pointer")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-7-chris@chris-wilson.co.uk
      3800cd19
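      Editor's sketch: the shape of the change, as a compilable user-space C
      toy (names hypothetical). Advancing the cached read pointer no longer
      mirrors the value into the CSB pointers register; the dropped write is
      left as a comment:

        #include <stdint.h>

        struct csb_head_model { uint32_t head; };

        static void advance_csb_model(struct csb_head_model *csb)
        {
                csb->head = (csb->head + 1) % 6;
                /* Dropped by this commit (hypothetical helper name):
                 *   write_mmio_masked(CSB_PTR_REG, csb->head);
                 * The register is not shadowed, so that write needed a
                 * forcewake we never took, and the HW never reads the
                 * value back anyway. */
        }

        int main(void)
        {
                struct csb_head_model csb = { .head = 5 };
                advance_csb_model(&csb);
                return csb.head;  /* 0 */
        }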
    • drm/i915/execlists: Reset CSB write pointer after reset · f4b58f04
      Chris Wilson committed
      On HW reset, the HW clears the write pointer (to 0). But since it also
      writes its first CSB entry to slot 0, we need to park the write pointer
      back on the element before it, so that the first entry we read is slot
      0 (a sketch follows this entry).
      
      This is required for the next patch, where we trust the CSB completely!
      
      v2: Use _MASKED_FIELD
      v3: Store the reset value, so that we differentiate between mmio/hwsp
      transparently and without pretense.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-6-chris@chris-wilson.co.uk
      f4b58f04
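      Editor's sketch: the post-reset fixup as a compilable user-space C toy
      (hypothetical names). Because the HW will write its first event to slot
      0, the read index is parked on the last slot so the first advance wraps
      to 0 and consumes exactly that entry:

        #define CSB_ENTRIES 6

        struct csb_ptrs_model {
                unsigned int head;  /* cached read index            */
                unsigned int tail;  /* stored reset value for write */
        };

        static void reset_csb_pointers_model(struct csb_ptrs_model *csb)
        {
                csb->head = CSB_ENTRIES - 1;
                csb->tail = CSB_ENTRIES - 1;  /* v3: remember the reset value */
        }

        int main(void)
        {
                struct csb_ptrs_model csb;
                reset_csb_pointers_model(&csb);
                return (csb.head + 1) % CSB_ENTRIES;  /* 0: first slot read */
        }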
    • drm/i915/execlists: Unify CSB access pointers · bc4237ec
      Chris Wilson committed
      Following the removal of the last workarounds, the only CSB mmio access
      is for the old vGPU interface. The mmio registers presented by vGPU do
      not require forcewake and can be treated as ordinary volatile memory,
      i.e. they behave just like the HWSP access just at a different location.
      We can reduce the CSB access to a set of read/write/buffer pointers and
      treat the various paths identically, without worrying about forcewake
      (a sketch follows this entry). Forcewake is a nightmare for worst-case
      latency, and we want to process this all with irqs off -- no latency
      allowed!
      
      v2: Comments, comments, comments. Well, 2 bonus comments.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-5-chris@chris-wilson.co.uk
      bc4237ec
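      Editor's sketch: the unified access as a compilable user-space C toy
      (hypothetical names). Both the HWSP buffer and a vGPU's mmio window
      behave as plain memory, so one reader sees only two pointers and never
      touches forcewake:

        #include <stdint.h>

        struct csb_access_model {
                const uint32_t *status;  /* base of the event buffer     */
                const uint32_t *write;   /* where the HW posts its index */
        };

        static uint32_t csb_pending_model(const struct csb_access_model *csb,
                                          uint32_t head)
        {
                return (*csb->write + 6 - head) % 6;
        }

        int main(void)
        {
                uint32_t hwsp[7] = { 0 };  /* slots 0..5 plus write index */
                hwsp[6] = 3;
                struct csb_access_model csb = {
                        .status = hwsp,
                        .write  = &hwsp[6],
                };
                return csb_pending_model(&csb, 0) == 3 ? 0 : 1;  /* 0 */
        }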
    • drm/i915/execlists: Process one CSB update at a time · 8ea397fa
      Chris Wilson committed
      In the next patch, we will process the CSB events directly from the
      submission path, rather than only after a CS interrupt. Hence we will
      no longer need to loop until the has-interrupt bit is clear, and in
      the meantime we can remove that small optimisation (a sketch follows
      this entry).
      
      v2: Tvrtko pointed out it was safer to unconditionally kick the tasklet
      after each irq, when assuming that the tasklet is called for each irq.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180628201211.13837-4-chris@chris-wilson.co.uk
      8ea397fa
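      Editor's sketch: the before/after of this commit as a compilable
      user-space C toy (hypothetical names). The irq handler kicks the
      tasklet unconditionally (v2), and each run makes exactly one pass over
      the new events instead of looping on a has-interrupt bit:

        struct csb_pass_model {
                int produced;  /* events written by the HW */
                int consumed;  /* events we have processed */
        };

        static void process_csb_once_model(struct csb_pass_model *m)
        {
                m->consumed = m->produced;  /* single pass, no re-check loop */
        }

        static void cs_irq_model(struct csb_pass_model *m)
        {
                m->produced++;
                process_csb_once_model(m);  /* stands in for a tasklet kick */
        }

        int main(void)
        {
                struct csb_pass_model m = { 0 };
                cs_irq_model(&m);
                cs_irq_model(&m);
                return m.produced - m.consumed;  /* 0: nothing left behind */
        }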