1. 05 Aug 2016, 1 commit
    • drm/i915: Enable i915_gem_wait_for_idle() without holding struct_mutex · dcff85c8
      Committed by Chris Wilson
      The principal motivation for this was to try and eliminate the
      struct_mutex from i915_gem_suspend - but we still need to hold the
      mutex for i915_gem_context_lost(). (The issue there is that there
      may be an indirect lockdep cycle between cpu_hotplug (i.e. suspend)
      and struct_mutex via the stop_machine().) For the moment, enabling
      last request tracking for the engine allows us to do busyness
      checking and waiting without requiring the struct_mutex - which is
      useful in its own right.
      
      As a side-effect of having a robust means for tracking engine busyness,
      we can replace our other busyness heuristic, that of comparing against
      the last submitted seqno. For paranoid reasons, we have a semi-ordered
      check of that seqno inside the hangchecker, which we can now improve to
      an ordered check of the engine's busyness (removing a locked xchg in the
      process).
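
      As a rough sketch (all names here are hypothetical, not the real
      i915 helpers), last-request tracking reduces the busyness check to
      a single ordered read of a per-engine pointer:

        #include <linux/compiler.h>

        /* Sketch only: hypothetical types and names. */
        struct request_sketch { bool completed; };
        struct engine_sketch { struct request_sketch *last_request; };

        static bool sketch_engine_busy(struct engine_sketch *engine)
        {
            /* One ordered read of the tracked request: no struct_mutex
             * and no locked xchg required.
             */
            struct request_sketch *rq = READ_ONCE(engine->last_request);

            return rq && !READ_ONCE(rq->completed);
        }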
      
      v2: Pass along "bool interruptible"; as we are unlocked, we cannot
      rely on i915->mm.interruptible being stable or even under our
      control.
      v3: Replace the Ironlake i915_gpu_busy() check with the common
      precalculated value.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1470388464-28458-6-git-send-email-chris@chris-wilson.co.uk
  2. 03 Aug 2016, 1 commit
  3. 27 Jul 2016, 1 commit
  4. 20 Jul 2016, 2 commits
  5. 15 Jul 2016, 1 commit
  6. 14 Jul 2016, 1 commit
  7. 08 Jul 2016, 1 commit
  8. 06 Jul 2016, 1 commit
  9. 05 Jul 2016, 3 commits
  10. 04 Jul 2016, 3 commits
  11. 02 Jul 2016, 10 commits
    • drm/i915: Remove debug noise on detecting fault-injection of missed interrupts · c5a7b5aa
      Committed by Chris Wilson
      Since the tests can and do explicitly check debugfs/i915_ring_missed_irqs
      for the handling of a "missed interrupt", adding it to the dmesg at INFO
      is just noise. When it happens for real, we still class it as an ERROR.
      
      Note that I have chosen to remove it entirely because when we detect
      the "missed interrupt" is irrelevant, and the message contains no
      more information than we can glean from looking in debugfs.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-20-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Move the get/put irq locking into the caller · 31bb59cc
      Committed by Chris Wilson
      With only a single callsite for intel_engine_cs->irq_get and ->irq_put,
      we can reduce the code size by moving the common preamble into the
      caller, and we can also eliminate the reference counting.
      
      For completeness, as we are no longer doing reference counting on
      the irq, rename the get/put vfuncs to enable/disable respectively,
      and review the use of posting reads. We only require serialisation
      with the hardware when enabling the interrupt (i.e. so that we
      cannot miss an interrupt by going to sleep before the hardware
      truly enables it).
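
      A minimal sketch of that serialisation (the register layout and
      names here are hypothetical, not the i915 vfuncs themselves):

        #include <linux/io.h>

        /* Sketch: an engine with an interrupt mask register (IMR). */
        struct engine_sketch {
            u32 __iomem *imr;
            u32 irq_enable_mask;
        };

        static void sketch_irq_enable(struct engine_sketch *engine)
        {
            writel(~engine->irq_enable_mask, engine->imr);
            /* Posting read: the write must reach the hardware before
             * we can safely go to sleep.
             */
            (void)readl(engine->imr);
        }

        static void sketch_irq_disable(struct engine_sketch *engine)
        {
            writel(~0u, engine->imr); /* no posting read needed here */
        }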
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-18-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Only apply one barrier after a breadcrumb interrupt is posted · 3d5564e9
      Committed by Chris Wilson
      If we flag the seqno as potentially stale upon receiving an interrupt,
      we can use that information to reduce the frequency that we apply the
      heavyweight coherent seqno read (i.e. if we wake up a chain of waiters).
      
      v2: Use cmpxchg to replace READ_ONCE/WRITE_ONCE for more explicit
      control of the ordering with respect to interrupt generation and
      interrupt checking in the bottom-half.
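
      Schematically (the flag here is hypothetical):

        #include <linux/compiler.h>

        /* Interrupt handler: note that the seqno may have advanced. */
        static void sketch_mark_irq_posted(unsigned int *irq_posted)
        {
            WRITE_ONCE(*irq_posted, 1);
        }

        /* Bottom-half: cmpxchg tests-and-clears the flag with full
         * ordering, so the heavyweight coherent seqno read is only
         * paid for when an interrupt has actually been posted.
         */
        static bool sketch_should_reread_seqno(unsigned int *irq_posted)
        {
            return cmpxchg(irq_posted, 1, 0) != 0;
        }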
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-14-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Add a delay between interrupt and inspecting the final seqno (ilk) · f8973c21
      Committed by Chris Wilson
      On Ironlake, there is no command nor register to ensure that the write
      from a MI_STORE command is completed (and coherent on the CPU) before the
      command parser continues. This means that the ordering between the seqno
      write and the subsequent user interrupt is undefined (like gen6+). So to
      ensure that the seqno write is completed after the final user interrupt
      we need to delay the read sufficiently to allow the write to complete.
      This delay is undefined by the bspec, and empirically requires 75us even
      though a register read combined with a clflush is less than 500ns. Hence,
      the delay is due to an on-chip buffer rather than the latency of the write
      to memory.
      
      Note that the render ring controls this by filling the PIPE_CONTROL fifo
      with stalling commands that force the earliest pipe-control with the
      seqno to be completed before the command parser continues. Given that we
      need a barrier operation for BSD, we may as well forgo the extra
      per-batch latency by using a common per-interrupt barrier.
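
      In outline, the common per-interrupt barrier is nothing more than
      a delay (75us is the empirical figure above; the function itself
      is a hypothetical sketch):

        #include <linux/delay.h>

        /* On Ironlake the MI_STORE of the seqno may still be sitting in
         * an on-chip buffer when the user interrupt fires, so give the
         * write time to become coherent before trusting the read.
         */
        static void sketch_ilk_seqno_barrier(void)
        {
            usleep_range(75, 250);
        }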
      
      Studying the impact of adding the usleep shows that the cost for
      both sequences of and individual synchronous no-op batches is
      negligible for the media engine (where the write is now unordered
      with the interrupt). Converting the render engine over from the
      current glutton of pipe-controls to the per-interrupt delays speeds
      up both the sequential and individual synchronous no-ops by 20% and
      60%, respectively. This speed-up holds even when looking at the
      throughput of small copies (4KiB->4MiB), both serial and
      synchronous, at about 20%. This is because, despite adding a
      significant delay to the interrupt, in all likelihood we will see
      the seqno write without having to apply the barrier (the delay is
      only necessary in the rare corner cases where the write is delayed
      right up to the final read).
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94307
      Testcase: igt/gem_sync #ilk
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-12-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Use HWS for seqno tracking everywhere · 1b7744e7
      Committed by Chris Wilson
      By using the same address for storing the HWS on every platform, we can
      remove the platform specific vfuncs and reduce the get-seqno routine to
      a single read of a cached memory location.
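
      The get-seqno routine then reduces to something like this sketch
      (the slot index and names are illustrative, not the exact i915
      definitions):

        #include <linux/compiler.h>
        #include <linux/types.h>

        #define SKETCH_HWS_SEQNO_INDEX 0x30 /* illustrative slot */

        /* The seqno lives at a fixed slot in the hardware status page,
         * so reading it is a single load from cached memory.
         */
        static u32 sketch_get_seqno(const u32 *hws_page)
        {
            return READ_ONCE(hws_page[SKETCH_HWS_SEQNO_INDEX]);
        }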
      
      v2: Fix semaphore_passed() to look at the signaling engine (not the
      waiter's)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-8-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Slaughter the thundering i915_wait_request herd · 688e6c72
      Committed by Chris Wilson
      One particularly stressful scenario consists of many independent tasks
      all competing for GPU time and waiting upon the results (e.g. realtime
      transcoding of many, many streams). One bottleneck in particular is that
      each client waits on its own results, but every client is woken up after
      every batchbuffer - hence the thunder of hooves, as every client
      must then do its heavyweight dance to read a coherent seqno to see
      if it is the lucky one.
      
      Ideally, we only want one client to wake up after the interrupt and
      check its request for completion. Since the requests must retire in
      order, we can select the first client on the oldest request to be woken.
      Once that client has completed its wait, we can then wake up the
      next client and so on. However, all clients then incur latency as
      every process in the chain may be delayed for scheduling - this may
      also then cause some priority inversion. To reduce the latency,
      when a client is added or removed from the list, we scan the tree
      for completed seqnos and wake up all the completed waiters in
      parallel.
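
      Schematically (hypothetical names), the waiters sit in a
      seqno-ordered rbtree and the interrupt wakes only the first:

        #include <linux/rbtree.h>
        #include <linux/sched.h>
        #include <linux/types.h>

        /* Sketch: one node per waiting client, ordered by seqno. */
        struct wait_sketch {
            struct rb_node node;
            struct task_struct *tsk;
            u32 seqno;
        };

        /* Wake just the oldest waiter; once its request completes, it
         * promotes the next waiter (and, per v11 below, sweeps the tree
         * to wake any already-completed waiters in parallel).
         */
        static void sketch_wake_first_waiter(struct rb_root *waiters)
        {
            struct rb_node *first = rb_first(waiters);

            if (first)
                wake_up_process(rb_entry(first, struct wait_sketch,
                                         node)->tsk);
        }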
      
      Using igt/benchmarks/gem_latency, we can demonstrate this effect. The
      benchmark measures the number of GPU cycles between completion of a
      batch and the client waking up from a call to wait-ioctl. With many
      concurrent waiters, with each on a different request, we observe that
      the wakeup latency before the patch scales nearly linearly with the
      number of waiters (before external factors kick in making the scaling much
      worse). After applying the patch, we can see that only the single waiter
      for the request is being woken up, providing a constant wakeup latency
      for every operation. However, the situation is not quite as rosy for
      many waiters on the same request, though to the best of my knowledge this
      is much less likely in practice. Here, we can observe that the
      concurrent waiters incur extra latency from being woken up by the
      solitary bottom-half, rather than directly by the interrupt. This
      appears to be scheduler induced (having discounted adverse effects
      from having an rbtree walk/erase in the wakeup path): each
      additional wake_up_process() costs approximately 1us on a big core.
      Another effect of performing the secondary wakeups from the first
      bottom-half is the delay this imposes on high-priority threads -
      rather than immediately returning to userspace and leaving the
      interrupt handler to wake the others.
      
      To offset the delay incurred with additional waiters on a request, we
      could use a hybrid scheme that did a quick read in the interrupt handler
      and dequeued all the completed waiters (incurring the overhead in the
      interrupt handler, not the best plan either as we then incur GPU
      submission latency) but we would still have to wake up the bottom-half
      every time to do the heavyweight slow read. Or we could only kick the
      waiters on the seqno with the same priority as the current task (i.e. in
      the realtime waiter scenario, only it is woken up immediately by the
      interrupt and simply queues the next waiter before returning to userspace,
      minimising its delay at the expense of the chain, and also reducing
      contention on its scheduler runqueue). This is effective at
      avoiding long pauses in the interrupt handler and the extra latency
      in realtime/high-priority waiters.
      
      v2: Convert from a kworker per engine into a dedicated kthread for the
      bottom-half.
      v3: Rename request members and tweak comments.
      v4: Use a per-engine spinlock in the breadcrumbs bottom-half.
      v5: Fix race in locklessly checking waiter status and kicking the task on
      adding a new waiter.
      v6: Fix deciding when to force the timer to hide missing interrupts.
      v7: Move the bottom-half from the kthread to the first client process.
      v8: Reword a few comments
      v9: Break the busy loop when the interrupt is unmasked or has fired.
      v10: Comments, unnecessary churn, better debugging from Tvrtko
      v11: Wake all completed waiters on removing the current bottom-half to
      reduce the latency of waking up a herd of clients all waiting on the
      same request.
      v12: Rearrange missed-interrupt fault injection so that it works with
      igt/drv_missed_irq_hang
      v13: Rename intel_breadcrumb and friends to intel_wait in preparation
      for signal handling.
      v14: RCU commentary, assert_spin_locked
      v15: Hide BUG_ON behind the compiler; report on gem_latency findings.
      v16: Sort seqno-groups by priority so that first-waiter has the highest
      task priority (and so avoid priority inversion).
      v17: Add waiters to post-mortem GPU hang state.
      v18: Return early for a completed wait after acquiring the spinlock.
      Avoids adding ourselves to the tree if the wait is already complete, and
      skips the awkward question of why we don't do completion wakeups for
      waits earlier than or equal to ourselves.
      v19: Prepare for init_breadcrumbs to fail. Later patches may want to
      allocate during init, so be prepared to propagate back the error code.
      
      Testcase: igt/gem_concurrent_blit
      Testcase: igt/benchmarks/gem_latency
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
      Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Cc: "Goel, Akash" <akash.goel@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> #v18
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-6-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Separate GPU hang waitqueue from advance · 1f15b76f
      Committed by Chris Wilson
      Currently __i915_wait_request uses a per-engine wait_queue_t for the dual
      purpose of waking after the GPU advances or for waking after an error.
      In the future, we may add even more wake sources and require greater
      separation, but for now we can conceptually simplify wakeups by separating
      the two sources. In particular, this allows us to use different wait-queues
      (e.g. one for engine advancement, a global one for errors and one
      on each request) without any hassle.
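
      A compressed sketch of the separation (structures are hypothetical):

        #include <linux/wait.h>

        struct error_sketch { wait_queue_head_t wait; };   /* global, errors */
        struct advance_sketch { wait_queue_head_t wait; }; /* per-engine */

        /* A GPU error wakes every sleeper on the global queue... */
        static void sketch_wake_error(struct error_sketch *e)
        {
            wake_up_all(&e->wait);
        }

        /* ...while seqno advancement only disturbs the engine's queue. */
        static void sketch_wake_advance(struct advance_sketch *a)
        {
            wake_up(&a->wait);
        }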
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-5-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Make queueing the hangcheck work inline · 26a02b8f
      Committed by Chris Wilson
      Since the function is a small wrapper around schedule_delayed_work(),
      move it inline to remove the function call overhead for the
      principal caller.
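
      Roughly (field names hypothetical; the 1500ms period is assumed
      here as the driver's usual hangcheck interval):

        #include <linux/workqueue.h>

        /* Small enough that inlining removes the call overhead; note
         * that scheduling an already-pending delayed work is a no-op.
         */
        static inline void sketch_queue_hangcheck(struct delayed_work *hangcheck)
        {
            schedule_delayed_work(hangcheck,
                                  round_jiffies_up_relative(msecs_to_jiffies(1500)));
        }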
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-4-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Remove the dedicated hangcheck workqueue · 77740025
      Committed by Chris Wilson
      The queue only ever contains at most one item and has no special flags.
      It is just a very simple wrapper around the system-wq - a complication
      with no benefits.
      
      v2: Use the system_long_wq as we may wish to capture the error state
      after detecting the hang - which may take a bit of time.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-3-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Delay queuing hangcheck to wait-request · 05535726
      Committed by Chris Wilson
      We can forgo queuing the hangcheck from the start of every request
      until we wait upon a request. This reduces the overhead of every
      request, but may increase the latency of detecting a hang. However,
      if nothing ever waits upon a hang, did it ever hang? It also
      improves the robustness of the wait-request by ensuring that the
      hangchecker is indeed running before we sleep indefinitely (and
      thereby ensuring that we never actually sleep forever waiting for
      a dead GPU).
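
      In outline (all names hypothetical), the hangcheck kick simply
      moves from request submission into the wait path:

        #include <linux/compiler.h>
        #include <linux/wait.h>
        #include <linux/workqueue.h>

        /* Sketch: a request with its own wait-queue and a handle on the
         * hangcheck work.
         */
        struct request_sketch {
            wait_queue_head_t wait;
            bool completed;
            struct delayed_work *hangcheck;
        };

        static void sketch_wait_request(struct request_sketch *rq)
        {
            /* Ensure the hangchecker is running before we sleep
             * indefinitely; re-queuing while pending is a no-op.
             */
            schedule_delayed_work(rq->hangcheck,
                                  round_jiffies_up_relative(msecs_to_jiffies(1500)));
            wait_event(rq->wait, READ_ONCE(rq->completed));
        }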
      
      As pointed out by Tvrtko, it is possible for a GPU hang to go
      unnoticed for as long as nobody is waiting for the GPU. Though this
      is rare, during that time we may be consuming more power than if we
      had promptly recovered, and in the most extreme case we may exhaust
      all memory before forcing the hangcheck. Something to be wary of in
      future.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-2-git-send-email-chris@chris-wilson.co.uk
  12. 06 Jun 2016, 1 commit
  13. 01 Jun 2016, 2 commits
    • drm/i915: Revert async unpin and nonblocking atomic commit · e42aeef1
      Committed by Daniel Vetter
      This reverts the following patches:
      
      d55dbd06 drm/i915: Allow nonblocking update of pageflips.
      15c86bdb drm/i915: Check for unpin correctness.
      95c2ccdc Reapply "drm/i915: Avoid stalling on pending flips for legacy cursor updates"
      a6747b73 drm/i915: Make unpin async.
      03f476e1 drm/i915: Prepare connectors for nonblocking checks.
      2099deff drm/i915: Pass atomic states to fbc update functions.
      ee7171af drm/i915: Remove reset_counter from intel_crtc.
      2ee004f7 drm/i915: Remove queue_flip pointer.
      b8d2afae drm/i915: Remove use_mmio_flip kernel parameter.
      8dd634d9 drm/i915: Remove cs based page flip support.
      143f73b3 drm/i915: Rework intel_crtc_page_flip to be almost atomic, v3.
      84fc494b drm/i915: Add the exclusive fence to plane_state.
      6885843a drm/i915: Convert flip_work to a list.
      aa420ddd drm/i915: Allow mmio updates on all platforms, v2.
      afee4d87 Revert "drm/i915: Avoid stalling on pending flips for legacy cursor updates"
      
      "drm/i915: Allow nonblocking update of pageflips" should have been
      split up, is missing a proper commit message, and seems to cause
      issues in the legacy page_flip path, as demonstrated by kms_flip.
      
      "drm/i915: Make unpin async" doesn't handle the unthrottled cursor
      updates correctly, leading to an apparent pin count leak. This is
      caught by the WARN_ON in i915_gem_object_do_pin which screams if we
      have more than DRM_I915_GEM_OBJECT_MAX_PIN_COUNT pins.
      
      Unfortunately we can't just revert these two because this patch series
      came with a built-in bisect breakage in the form of temporarily
      removing the unthrottled cursor update hack for legacy cursor ioctl.
      Therefore there's no other option than to revert the entire pile :(
      
      There's one tiny conflict in intel_drv.h due to other patches, nothing
      serious.
      
      Normally I'd wait a bit longer before doing a maintainer revert, but
      since the minimal set of patches we need to revert (due to the bisect
      breakage) is so big, time is running out fast. And very soon
      (especially after a few attempts at fixing issues) it'll be really
      hard to revert things cleanly.
      
      Lessons learned:
      - Not a good idea to rush the review (done by someone fairly new to
        the area) and not make sure domain experts had a chance to read it.
      
      - Patches should be properly split up. I only looked at the two
        patches that should be reverted in detail, but both look like they
        mix up different things in one patch.
      
      - Patches really should have proper commit messages. Especially when
        doing more than one thing, and especially when touching critical and
        tricky core code.
      
      - Building a patch series and r-b stamping it when it has a built-in
        bisect breakage is not a good idea.
      
      - I also think we need to stop building up technical debt by
        postponing atomic igt testcases even longer. I think it's clear that
        there's enough corner cases in this beast that we really need to
        have the testcases _before_ the next step lands.
      
      (cherry picked from commit 5a21b665
      from drm-intel-next-queued)
      
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
      Cc: John Harrison <John.C.Harrison@Intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Acked-by: Dave Airlie <airlied@redhat.com>
      Acked-by: Jani Nikula <jani.nikula@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
    • drm/i915: Update GEN6_PMINTRMSK setup with GuC enabled · 1800ad25
      Committed by Sagar Arun Kamble
      On loading, GuC sets PM interrupt routing (bit 31) and clears the
      ARAT expired interrupt (bit 9). Host turbo also updates this
      register in RPS flows. This patch ensures the bit 31 and bit 9
      setup by GuC persists. The ARAT timer interrupt is needed by GuC
      for various features. It also facilitates halting GuC and hence
      achieving RC6. PM interrupt routing will not impact RPS interrupt
      reception by the host, as GuC will redirect them.
      This patch fixes the igt test pm_rc6_residency, which was failing
      with GuC load/submission enabled. Tested with SKL GuC v6.1 and BXT
      GuC v5.1 and v8.7.
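
      Schematically (pm_intr_keep echoes the v4 note below; the bit
      meanings are taken from the message above, everything else is a
      hypothetical sketch):

        #include <linux/types.h>

        /* Sketch: whatever mask the host RPS code computes, preserve
         * the GuC setup - routing to GuC stays set (bit 31) and the
         * ARAT expired interrupt stays unmasked (bit 9 clear).
         */
        static u32 sketch_sanitize_pm_intrmsk(u32 rps_mask)
        {
            return (rps_mask | (1u << 31)) & ~(1u << 9);
        }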
      
      v2: i915_irq/i915_pm decoupling from intel_guc. (ChrisW)
      
      v3: restructuring the mask update and rebase w.r.t Ville's patch. (ChrisW)
      
      v4: Updating the pm_intr_keep during direct_interrupts_to_guc. (Sagar)
      
      Cc: Chris Harris <chris.harris@intel.com>
      Cc: Zhe Wang <zhe1.wang@intel.com>
      Cc: Deepak S <deepak.s@intel.com>
      Cc: Satyanantha, Rama Gopal M <rama.gopal.m.satyanantha@intel.com>
      Cc: Akash Goel <akash.goel@intel.com>
      Testcase: igt/pm_rc6_residency
      Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
      Tested-by: Matt Roper <matthew.d.roper@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1464683307-19475-1-git-send-email-sagar.a.kamble@intel.com
  14. 25 May 2016, 1 commit
    • drm/i915: Revert async unpin and nonblocking atomic commit · 5a21b665
      Committed by Daniel Vetter
      This reverts the following patches:
      
      d55dbd06 drm/i915: Allow nonblocking update of pageflips.
      15c86bdb drm/i915: Check for unpin correctness.
      95c2ccdc Reapply "drm/i915: Avoid stalling on pending flips for legacy cursor updates"
      a6747b73 drm/i915: Make unpin async.
      03f476e1 drm/i915: Prepare connectors for nonblocking checks.
      2099deff drm/i915: Pass atomic states to fbc update functions.
      ee7171af drm/i915: Remove reset_counter from intel_crtc.
      2ee004f7 drm/i915: Remove queue_flip pointer.
      b8d2afae drm/i915: Remove use_mmio_flip kernel parameter.
      8dd634d9 drm/i915: Remove cs based page flip support.
      143f73b3 drm/i915: Rework intel_crtc_page_flip to be almost atomic, v3.
      84fc494b drm/i915: Add the exclusive fence to plane_state.
      6885843a drm/i915: Convert flip_work to a list.
      aa420ddd drm/i915: Allow mmio updates on all platforms, v2.
      afee4d87 Revert "drm/i915: Avoid stalling on pending flips for legacy cursor updates"
      
      "drm/i915: Allow nonblocking update of pageflips" should have been
      split up, is missing a proper commit message, and seems to cause
      issues in the legacy page_flip path, as demonstrated by kms_flip.
      
      "drm/i915: Make unpin async" doesn't handle the unthrottled cursor
      updates correctly, leading to an apparent pin count leak. This is
      caught by the WARN_ON in i915_gem_object_do_pin which screams if we
      have more than DRM_I915_GEM_OBJECT_MAX_PIN_COUNT pins.
      
      Unfortunately we can't just revert these two because this patch series
      came with a built-in bisect breakage in the form of temporarily
      removing the unthrottled cursor update hack for legacy cursor ioctl.
      Therefore there's no other option than to revert the entire pile :(
      
      There's one tiny conflict in intel_drv.h due to other patches, nothing
      serious.
      
      Normally I'd wait a bit longer before doing a maintainer revert, but
      since the minimal set of patches we need to revert (due to the bisect
      breakage) is so big, time is running out fast. And very soon
      (especially after a few attempts at fixing issues) it'll be really
      hard to revert things cleanly.
      
      Lessons learned:
      - Not a good idea to rush the review (done by someone fairly new to
        the area) and not make sure domain experts had a chance to read it.
      
      - Patches should be properly split up. I only looked at the two
        patches that should be reverted in detail, but both look like they
        mix up different things in one patch.
      
      - Patches really should have proper commit messages. Especially when
        doing more than one thing, and especially when touching critical and
        tricky core code.
      
      - Building a patch series and r-b stamping it when it has a built-in
        bisect breakage is not a good idea.
      
      - I also think we need to stop building up technical debt by
        postponing atomic igt testcases even longer. I think it's clear that
        there's enough corner cases in this beast that we really need to
        have the testcases _before_ the next step lands.
      
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Patrik Jakobsson <patrik.jakobsson@linux.intel.com>
      Cc: John Harrison <John.C.Harrison@Intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Acked-by: Dave Airlie <airlied@redhat.com>
      Acked-by: Jani Nikula <jani.nikula@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
  15. 23 May 2016, 1 commit
  16. 19 May 2016, 4 commits
  17. 11 May 2016, 2 commits
  18. 09 May 2016, 2 commits
    • drm/i915: Store a i915 backpointer from engine, and use it · c033666a
      Committed by Chris Wilson
         text	   data	    bss	    dec	    hex	filename
      6309351	3578714	 696320	10584385	 a18141	vmlinux
      6308391	3578714	 696320	10583425	 a17d81	vmlinux
      
      Almost 1KiB of code reduction.
      
      v2: More s/INTEL_INFO()->gen/INTEL_GEN()/ and IS_GENx() conversions
      
         text	   data	    bss	    dec	    hex	filename
      6304579	3578778	 696320	10579677	 a16edd	vmlinux
      6303427	3578778	 696320	10578525	 a16a5d	vmlinux
      
      Now over 1KiB!
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1462545621-30125-3-git-send-email-chris@chris-wilson.co.uk
    • drm/i915: Small display interrupt handlers tidy · 91d14251
      Committed by Tvrtko Ursulin
      I have noticed some of our interrupt handlers use both dev and
      dev_priv while they could get away with only dev_priv in the
      huge majority of cases.
      
      Tidying that up had a cascading effect on function prototypes, so
      the churn factor is relatively big, but I think it is for the
      better.
      
      For example, even where changes cascade out of i915_irq.c, for
      functions prefixed with intel_, genX_ or <plat>_, it makes more
      sense to take dev_priv directly anyway.
      
      This allows us to eliminate local variables and intermixed usage
      of dev and dev_priv where only one is good enough.
      
      End result is shrinkage of both source and the resulting binary.
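
      For example, a handler prototype changes along these lines (purely
      illustrative, before and after):

        /* Before: take dev and immediately dig dev_priv out of it. */
        static void sketch_hpd_irq_handler(struct drm_device *dev,
                                           u32 hotplug_trigger);

        /* After: take dev_priv directly and drop the local dance. */
        static void sketch_hpd_irq_handler(struct drm_i915_private *dev_priv,
                                           u32 hotplug_trigger);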
      
      i915.ko:
      
       - .text         000b0899
       + .text         000b0619
      
      Or if we look at the Gen8 display irq chain:
      
       -00000000000006ad t gen8_irq_handler
       +0000000000000663 t gen8_irq_handler
         -0000000000000028 T intel_opregion_asle_intr
         +0000000000000024 T intel_opregion_asle_intr
         -000000000000008c t ilk_hpd_irq_handler
         +000000000000007f t ilk_hpd_irq_handler
         -0000000000000116 T intel_check_page_flip
         +0000000000000112 T intel_check_page_flip
         -000000000000011a T intel_prepare_page_flip
         +0000000000000119 T intel_prepare_page_flip
         -0000000000000014 T intel_finish_page_flip_plane
         +0000000000000013 T intel_finish_page_flip_plane
         -0000000000000053 t hsw_pipe_crc_irq_handler
         +000000000000004c t hsw_pipe_crc_irq_handler
         -000000000000022e t cpt_irq_handler
         +0000000000000213 t cpt_irq_handler
      
      So the shrinkage is small, but it is all fast paths, so it does no harm.
      
      Situation is similar in other interrupt handlers as well.
      
      v2: Tidy intel_queue_rps_boost_for_request as well. (Chris Wilson)
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
  19. 20 Apr 2016, 1 commit
  20. 14 Apr 2016, 1 commit