1. 08 Mar, 2019: 1 commit
  2. 06 Mar, 2019: 1 commit
  3. 28 Feb, 2019: 1 commit
  4. 26 Feb, 2019: 1 commit
    • drm/i915: Replace global_seqno with a hangcheck heartbeat seqno · 89531e7d
      Authored by Chris Wilson
      To determine whether an engine has 'stuck', we simply check whether or
      not it is still on the same seqno for several seconds. To keep this simple
      mechanism intact over the loss of a global seqno, we can simply add a
      new global heartbeat seqno instead. As we cannot know the sequence in
      which requests will then be completed, we use a primitive random number
      generator instead (with a cycle long enough to not matter over an
      interval of a few thousand requests between hangcheck samples).
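
      The "primitive random number generator" described above can be sketched
      as a full-period linear congruential generator: every hangcheck-visible
      write produces a fresh value, so two equal samples taken seconds apart
      imply no request completed in between. A minimal userspace sketch (the
      constants and helper names are illustrative, not the i915 code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a full-period 32-bit LCG (Numerical Recipes
 * constants). Because the recurrence has no fixed point modulo 2^32,
 * two consecutive heartbeat values always differ, so an unchanged
 * sample really does mean "no heartbeat write happened". */
static uint32_t heartbeat_next(uint32_t prev)
{
        return prev * 1664525u + 1013904223u;
}

/* Hangcheck side: the engine is suspected stuck if the sampled
 * heartbeat has not moved since the previous sample. */
static int heartbeat_stuck(uint32_t prev_sample, uint32_t new_sample)
{
        return prev_sample == new_sample;
}
```

      The long cycle (2^32) is what makes repeats over a few thousand
      requests between hangcheck samples a non-issue.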
      
      The alternative to using a dedicated seqno on every request is to issue
      a heartbeat request and query its progress through the system. Sadly
      this requires us to reduce struct_mutex so that we can issue requests
      without requiring that bkl.
      
      v2: And without the extra CS_STALL for the hangcheck seqno -- we don't
      need strict serialisation with what comes later, we just need to be sure
      we don't write the hangcheck seqno before our batch is flushed.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190226094922.31617-1-chris@chris-wilson.co.uk
      89531e7d
  5. 21 Feb, 2019: 1 commit
  6. 13 Feb, 2019: 1 commit
  7. 09 Feb, 2019: 2 commits
  8. 08 Feb, 2019: 1 commit
  9. 06 Feb, 2019: 2 commits
  10. 30 Jan, 2019: 2 commits
    • drm/i915: Drop fake breadcrumb irq · 789659f4
      Authored by Chris Wilson
      Missed breadcrumb detection is defunct due to the tight coupling with
      dma_fence signaling and the myriad ways we may signal fences from
      everywhere but from an interrupt, i.e. we frequently signal a fence
      before we even see its interrupt. This means that even if we miss an
      interrupt for a fence, it still is signaled before our breadcrumb
      hangcheck fires, so simplify the breadcrumb hangchecking by moving it
      into the GPU hangcheck and forgo fake interrupts.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-3-chris@chris-wilson.co.uk
      789659f4
    • drm/i915: Replace global breadcrumbs with per-context interrupt tracking · 52c0fdb2
      Authored by Chris Wilson
      A few years ago, see commit 688e6c72 ("drm/i915: Slaughter the
      thundering i915_wait_request herd"), the issue of handling multiple
      clients waiting in parallel was brought to our attention. The
      requirement was that every client should be woken immediately upon its
      request being signaled, without incurring any cpu overhead.
      
      Handling certain fragility of our hw meant that we could not do a
      simple check inside the irq handler (some generations required almost
      unbounded delays before we could be sure of seqno coherency) and so
      request completion checking required delegation.
      
      Before commit 688e6c72, the solution was simple. Every client
      waiting on a request would be woken on every interrupt and each would do
      a heavyweight check to see if their request was complete. Commit
      688e6c72 introduced an rbtree so that only the earliest waiter on
      the global timeline would be woken, and it would wake the next and so on.
      (Along with various complications to handle requests being reordered
      along the global timeline, and also a requirement for kthread to provide
      a delegate for fence signaling that had no process context.)
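
      The rbtree scheme described above can be sketched in userspace as an
      ordered waiter queue: the interrupt wakes only the head, and each
      completed waiter dequeues itself so the wakeup is handed on down the
      line. All names here are hypothetical simplifications, not the actual
      intel_breadcrumbs code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One sleeping client, keyed by the seqno it waits for. */
struct waiter {
        uint32_t seqno;       /* request this client waits on       */
        int woken;            /* set once this waiter has been woken */
        struct waiter *next;  /* next-oldest waiter in the queue     */
};

/* Insert in ascending seqno order, mimicking the rbtree ordering. */
static void waiter_add(struct waiter **head, struct waiter *w)
{
        while (*head && (*head)->seqno < w->seqno)
                head = &(*head)->next;
        w->next = *head;
        *head = w;
}

/* Interrupt path: only the head is examined; each completed waiter
 * is woken and dequeued, exposing its successor as the new head. */
static void irq_wake(struct waiter **head, uint32_t hw_seqno)
{
        while (*head && (*head)->seqno <= hw_seqno) {
                (*head)->woken = 1;    /* this client's request completed */
                *head = (*head)->next; /* pass the baton to the successor */
        }
}
```

      The point of the ordering is that an interrupt touches only the
      front of the queue, not every sleeping client.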
      
      The global rbtree depends on knowing the execution timeline (and global
      seqno). Without knowing that order, we must instead check all contexts
      queued to the HW to see which may have advanced. We trim that list by
      only checking queued contexts that are being waited on, but still we
      keep a list of all active contexts and their active signalers that we
      inspect from inside the irq handler. By moving the waiters onto the fence
      signal list, we can combine the client wakeup with the dma_fence
      signaling (a dramatic reduction in complexity, but does require the HW
      being coherent, the seqno must be visible from the cpu before the
      interrupt is raised - we keep a timer backup just in case).
      
      Having previously fixed all the issues with irq-seqno serialisation (by
      inserting delays onto the GPU after each request instead of random delays
      on the CPU after each interrupt), we can rely on the seqno state to
      perform direct wakeups from the interrupt handler. This allows us to
      preserve our single context switch behaviour of the current routine,
      with the only downside that we lose the RT priority sorting of wakeups.
      In general, direct wakeup latency of multiple clients is about the same
      (about 10% better in most cases) with a reduction in total CPU time spent
      in the waiter (about 20-50% depending on gen). Average herd behaviour is
      improved, but at the cost of not delegating wakeups on task_prio.
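
      The per-context replacement can be sketched the same way: each context
      keeps its in-flight fences in submission order, and a single routine,
      callable from both the irq handler and the backup timer, signals every
      fence the HW seqno has passed. The wrap-safe comparison mirrors the
      kernel's i915_seqno_passed() idiom; everything else here is a
      hypothetical userspace stand-in:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a dma_fence pending on one context's timeline. */
struct fencelike {
        uint32_t seqno;
        int signaled;
        struct fencelike *next;
};

struct ctx {
        struct fencelike *signals; /* oldest first: submission order */
};

/* Called from the irq handler and from the safety timer alike: the
 * signed-difference test handles seqno wraparound, as in the kernel's
 * i915_seqno_passed(). Signaling here stands in for dma_fence_signal()
 * plus the direct client wakeup. */
static void ctx_signal(struct ctx *c, uint32_t hw_seqno)
{
        while (c->signals &&
               (int32_t)(hw_seqno - c->signals->seqno) >= 0) {
                c->signals->signaled = 1;
                c->signals = c->signals->next;
        }
}
```

      Because the list is per-context and in submission order, no global
      execution order (global seqno) is needed to know what to signal.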
      
      v2: Capture fence signaling state for error state and add comments to
      warm even the most cold of hearts.
      v3: Check if the request is still active before busywaiting
      v4: Reduce the amount of pointer misdirection with list_for_each_safe
      and using a local i915_request variable inside the loops
      v5: Add a missing pluralisation to a purely informative selftest message.
      
      References: 688e6c72 ("drm/i915: Slaughter the thundering i915_wait_request herd")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190129205230.19056-2-chris@chris-wilson.co.uk
      52c0fdb2
  11. 29 Jan, 2019: 1 commit
  12. 28 Jan, 2019: 1 commit
  13. 25 Jan, 2019: 1 commit
  14. 24 Jan, 2019: 1 commit
  15. 23 Jan, 2019: 2 commits
  16. 18 Jan, 2019: 1 commit
  17. 17 Jan, 2019: 2 commits
  18. 15 Jan, 2019: 8 commits
  19. 10 Jan, 2019: 2 commits
  20. 08 Jan, 2019: 1 commit
  21. 07 Jan, 2019: 1 commit
  22. 02 Jan, 2019: 1 commit
  23. 28 Dec, 2018: 2 commits
  24. 27 Dec, 2018: 1 commit
  25. 20 Dec, 2018: 1 commit
  26. 17 Dec, 2018: 1 commit
    • drm/i915/dsc: Add Per connector debugfs node for DSC support/enable · e845f099
      Authored by Manasi Navare
      DSC can be supported per DP connector. This patch adds a per connector
      debugfs node to expose DSC support capability by the kernel.
      The same node can be used from userspace to force DSC enable.
      
      force_dsc_en written through this debugfs node is used to force
      DSC even for lower resolutions.
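
      The write side of such a debugfs node boils down to parsing a
      user-supplied boolean. A userspace approximation of the rules the
      kernel's kstrtobool()/kstrtobool_from_user() helpers apply (accepting
      1/0, y/n, on/off); this sketch is not the kernel function itself:

```c
#include <assert.h>
#include <stddef.h>

/* Approximation of kernel kstrtobool() semantics: "1"/"y"/"Y" and
 * "on" parse as true, "0"/"n"/"N" and "off" as false; anything else
 * is rejected. Returns 0 on success, -1 on unrecognised input. */
static int parse_bool(const char *s, int *res)
{
        if (!s || !s[0])
                return -1;
        switch (s[0]) {
        case 'y': case 'Y': case '1':
                *res = 1;
                return 0;
        case 'n': case 'N': case '0':
                *res = 0;
                return 0;
        case 'o': case 'O':
                switch (s[1]) {
                case 'n': case 'N':
                        *res = 1;
                        return 0;
                case 'f': case 'F':
                        *res = 0;
                        return 0;
                }
                return -1;
        default:
                return -1;
        }
}
```

      In the kernel, the debugfs write handler passes the user buffer to
      kstrtobool_from_user() and stores the result in force_dsc_en.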
      
      Credits to Ville Syrjala for suggesting the proper locks to be used
      and to Lyude Paul for explaining how to use them in this context
      
      v8:
      * Add else if (ret) for drm_modeset_lock (Lyude)
      v7:
      * Get crtc, crtc_state from connector atomic state
      and add proper locks and backoff (Ville, Chris Wilson, Lyude)
      (Suggested-by: Ville Syrjala <ville.syrjala@linux.intel.com>)
      * Use %zu for printing size_t variable (Lyude)
      v6:
      * Read fec_capable only for non edp (Manasi)
      v5:
      * Name it dsc sink support and also add
      fec support in the same node (Ville)
      v4:
      * Add missed connector_status check (Manasi)
      * Create i915_dsc_support node only for Gen >=10 (manasi)
      * Access intel_dp->dsc_dpcd only if its not NULL (Manasi)
      v3:
      * Combine Force_dsc_en with this patch (Ville)
      v2:
      * Use kstrtobool_from_user to avoid explicit error checking (Lyude)
      * Rebase on drm-tip (Manasi)
      
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
      Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
      Cc: Lyude Paul <lyude@redhat.com>
      Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
      Reviewed-by: Lyude Paul <lyude@redhat.com>
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20181206005407.4698-1-manasi.d.navare@intel.com
      e845f099