1. 31 Oct 2017, 1 commit
  2. 27 Oct 2017, 1 commit
  3. 16 Oct 2017, 1 commit
  4. 07 Oct 2017, 1 commit
  5. 06 Oct 2017, 1 commit
  6. 05 Oct 2017, 1 commit
  7. 19 Sep 2017, 1 commit
  8. 04 Sep 2017, 1 commit
  9. 15 Aug 2017, 1 commit
  10. 15 Jun 2017, 3 commits
  11. 26 May 2017, 1 commit
  12. 03 May 2017, 1 commit
    • drm/i915: Squash repeated awaits on the same fence · 47979480
      Committed by Chris Wilson
      Track the latest fence waited upon on each context, and only add a new
      asynchronous wait if the new fence is more recent than the recorded
      fence for that context (see the sketch after this entry). This requires
      us to filter out unordered timelines, which are noted by
      DMA_FENCE_NO_CONTEXT. However, in the absence of a universal identifier,
      we have to use our own i915->mm.unordered_timeline token.
      
      v2: Throw around the debug crutches
      v3: Inline the likely case of the pre-allocation cache being full.
      v4: Drop the pre-allocation support, we can lose the most recent fence
      in case of allocation failure -- it just means we may emit more awaits
      than strictly necessary but will not break.
      v5: Trim allocation size for leaf nodes, they only need an array of u32
      not pointers.
      v6: Create mock_timeline to tidy selftest writing
      v7: s/intel_timeline_sync_get/intel_timeline_sync_is_later/ (Tvrtko)
      v8: Prune the stale sync points when we idle.
      v9: Include a small benchmark in the kselftests
      v10: Separate the idr implementation into its own compartment. (Tvrtko)
      v11: Refactor igt_sync kselftests to avoid deep nesting (Tvrtko)
      v12: __sync_leaf_idx() to assert that p->height is 0 when checking leaves
      v13: kselftests to investigate struct i915_syncmap itself (Tvrtko)
      v14: Foray into ascii art graphs
      v15: Take into account that the random lookup/insert does 2 prng calls,
      not 1, when benchmarking, and use for_each_set_bit() (Tvrtko)
      v16: Improved ascii art
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170503093924.5320-4-chris@chris-wilson.co.uk
      47979480
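      A minimal sketch of the idea above, assuming a flat context->seqno
      array rather than the driver's compressed radix tree (struct
      i915_syncmap); sync_cache, sync_is_later() and sync_set() are
      hypothetical names, and the seqno comparison is wraparound-safe in the
      spirit of i915_seqno_passed():

      ```c
      /*
       * Illustrative only: remember the most recent seqno already awaited on
       * each fence context, and skip the await when the new fence is older
       * or equal. Fences from unordered timelines (the driver's
       * i915->mm.unordered_timeline token) must never be cached.
       */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdlib.h>

      struct sync_entry {
          uint64_t context;  /* fence timeline identifier */
          uint32_t seqno;    /* latest seqno awaited on that timeline */
      };

      struct sync_cache {
          struct sync_entry *entries;
          size_t count, capacity;
      };

      /* Have we already awaited @seqno (or later) on @context? */
      static bool sync_is_later(const struct sync_cache *cache,
                                uint64_t context, uint32_t seqno)
      {
          for (size_t i = 0; i < cache->count; i++)
              if (cache->entries[i].context == context)
                  return (int32_t)(cache->entries[i].seqno - seqno) >= 0;
          return false;
      }

      /* Record @seqno as the most recent await on @context (best effort). */
      static void sync_set(struct sync_cache *cache,
                           uint64_t context, uint32_t seqno)
      {
          for (size_t i = 0; i < cache->count; i++) {
              if (cache->entries[i].context == context) {
                  cache->entries[i].seqno = seqno;
                  return;
              }
          }

          if (cache->count == cache->capacity) {
              size_t cap = cache->capacity ? 2 * cache->capacity : 8;
              void *tmp = realloc(cache->entries, cap * sizeof(*cache->entries));

              if (!tmp)
                  return; /* dropping the entry only risks a redundant await */
              cache->entries = tmp;
              cache->capacity = cap;
          }

          cache->entries[cache->count].context = context;
          cache->entries[cache->count].seqno = seqno;
          cache->count++;
      }
      ```

      A caller would test sync_is_later() before emitting an await and call
      sync_set() afterwards; as in v4 of the changelog, an allocation failure
      is tolerated because the worst case is an unnecessary await.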
  13. 07 Mar 2017, 1 commit
  14. 22 Feb 2017, 1 commit
  15. 14 Feb 2017, 1 commit
    • drm/i915: Provide a hook for selftests · 953c7f82
      Committed by Chris Wilson
      Some pieces of code are independent of hardware but are very tricky to
      exercise through the normal userspace ABI or via debugfs hooks. Being
      able to create mock unit tests and execute them through CI is vital.
      Start by adding a central point where we can execute unit tests and
      a parameter to enable them. This is disabled by default as the
      expectation is that these tests will occasionally explode.
      
      To facilitate integration with igt, any parameter beginning with
      i915.igt__ is interpreted as a subtest executable independently via
      igt/drv_selftest.
      
      Two classes of selftests are recognised: mock unit tests and integration
      tests. Mock unit tests are run as soon as the module is loaded, before
      the device is probed. At that point there is no driver instantiated and
      all hw interactions must be "mocked". This is very useful for writing
      universal tests to exercise code not typically run on a broad range of
      architectures. Alternatively, you can hook into the live selftests and
      run when the device has been instantiated - hw interactions are real
      (see the sketch after this entry).
      
      v2: Add a macro for compiling conditional code for mock objects inside
      real objects.
      v3: Differentiate between mock unit tests and late integration test.
      v4: List the tests in natural order, use igt to sort after modparam.
      v5: s/late/live/
      v6: s/unsigned long/unsigned int/
      v7: Use igt_ prefixes for long helpers.
      v8: Deobfuscate macros overriding functions, stop using -I$(src)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170213171558.20942-1-chris@chris-wilson.co.uk
      953c7f82
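      A minimal sketch of the mock phase described above, assuming a plain
      function table and a hypothetical "st_enable" module parameter; the
      driver's real parameter and subtest naming (the i915.igt__ prefix
      mentioned above) are not reproduced here:

      ```c
      /*
       * Illustrative only: run a table of hardware-independent "mock" tests
       * at module load, before any device is probed, gated by a parameter
       * that defaults to off because the tests are expected to explode.
       */
      #include <linux/kernel.h>
      #include <linux/module.h>

      struct mock_selftest {
          const char *name;
          int (*func)(void);  /* no device argument: all hw is mocked */
      };

      static int st_enable; /* hypothetical parameter, disabled by default */
      module_param(st_enable, int, 0400);

      static int igt_mock_example(void)
      {
          return 0; /* a test returns 0 on success, -errno on failure */
      }

      static const struct mock_selftest mock_selftests[] = {
          { "mock_example", igt_mock_example },
      };

      /* Called from module init, before the PCI device is probed. */
      static int run_mock_selftests(void)
      {
          unsigned int i;

          if (!st_enable)
              return 0;

          for (i = 0; i < ARRAY_SIZE(mock_selftests); i++) {
              int err = mock_selftests[i].func();

              if (err) {
                  pr_err("mock selftest %s failed: %d\n",
                         mock_selftests[i].name, err);
                  return err; /* abort module load so CI notices */
              }
          }

          return 0;
      }
      ```

      Live selftests would use the same shape but take the instantiated
      device as an argument and run only after probe has completed.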
  16. 09 Feb 2017, 1 commit
  17. 25 Jan 2017, 1 commit
  18. 19 Jan 2017, 1 commit
    • drm/i915/huc: Add HuC fw loading support · bd132858
      Committed by Anusha Srivatsa
      The HuC loading process is similar to GuC. The intel_uc_fw_fetch()
      is used for both cases.
      
      HuC loading needs to be before GuC loading. The WOPCM setting must
      be done early, before loading either of them (see the ordering sketch
      after this entry).
      
      v2: rebased on top of drm-intel-nightly.
          removed if(HAS_GUC()) before the guc call. (D.Gordon)
          updated the huc_version number format.
      v3: rebased to drm-intel-nightly, changed the file name format to
          match the one in the huc package.
          Changed dev->dev_private to to_i915()
      v4: moved function back to where it was.
          change wait_for_atomic to wait_for.
      v5: rebased. Changed the year in the copyright message to reflect
      the right year. Correct the comments, remove the unwanted WARN message,
      replace drm_gem_object_unreference() with i915_gem_object_put(). Make the
      prototypes in intel_huc.h non-extern.
      v6: rebased. Update the file construction done by HuC. It is similar to
      GuC. Adopted the approach used in
      https://patchwork.freedesktop.org/patch/104355/ <Tvrtko Ursulin>
      v7: Change dev to dev_priv in macro definition.
      Corrected comments.
      v8: rebased on top of drm-tip. Updated functions intel_huc_load(),
      intel_huc_init() and intel_uc_fw_fetch() to accept dev_priv instead of
      dev. Moved contents of intel_huc.h to intel_uc.h.
      v9: change SKL_FW_ to SKL_HUC_FW_. Add intel_ prefix to guc_wopcm_size().
      Remove unwanted checks in intel_uc.h. Rename huc_fw in struct intel_huc to
      simply fw to avoid redundancy.
      v10: rebased. Correct comments. Make intel_huc_fini() accept dev_priv
      instead of dev like intel_huc_init() and intel_huc_load(). Move definition
      to i915_guc_reg.h from intel_uc.h. Clean DMA_CTRL bits after HuC DMA
      transfer in huc_ucode_xfer() instead of guc_ucode_xfer(). Add suitable
      WARNs to give extra info.
      v11: rebased. Add proper bias for HuC and make sure there are
      asserts on failure by using guc_ggtt_offset_vma(). Introduce
      intel_huc.c and remove intel_huc_loader.c since it has functions that
      do more than just loading. Correct year in copyright.
      v12: remove invalidates that are not required anymore.
      
      Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
      Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Tested-by: Xiang Haihao <haihao.xiang@intel.com>
      Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
      Signed-off-by: Alex Dai <yu.dai@intel.com>
      Signed-off-by: Peter Antoine <peter.antoine@intel.com>
      Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1484755558-1234-1-git-send-email-anusha.srivatsa@intel.com
      bd132858
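      A small sketch of the ordering constraint above, with stubbed-out
      helpers; uc_device, wopcm_partition(), huc_fw_load() and guc_fw_load()
      are hypothetical stand-ins, not the driver's actual functions
      (intel_uc_fw_fetch() and friends):

      ```c
      /*
       * Illustrative only: settle the WOPCM layout first, then load the HuC
       * firmware before the GuC firmware, as required by the commit above.
       */
      struct uc_device { int dummy; }; /* stand-in for the real device state */

      static int wopcm_partition(struct uc_device *uc) { (void)uc; return 0; }
      static int huc_fw_load(struct uc_device *uc)     { (void)uc; return 0; }
      static int guc_fw_load(struct uc_device *uc)     { (void)uc; return 0; }

      static int uc_load_firmware(struct uc_device *uc)
      {
          int err;

          /* WOPCM sizing must be done before any firmware is loaded. */
          err = wopcm_partition(uc);
          if (err)
              return err;

          /* HuC loading has to come before GuC loading. */
          err = huc_fw_load(uc);
          if (err)
              return err;

          return guc_fw_load(uc);
      }
      ```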
  19. 18 Jan 2017, 1 commit
  20. 13 Dec 2016, 1 commit
  21. 26 Nov 2016, 1 commit
  22. 22 Nov 2016, 2 commits
  23. 11 Nov 2016, 1 commit
  24. 02 Nov 2016, 1 commit
  25. 29 Oct 2016, 2 commits
  26. 18 Oct 2016, 1 commit
  27. 12 Oct 2016, 1 commit
  28. 09 Sep 2016, 1 commit
  29. 20 Aug 2016, 1 commit
  30. 12 Aug 2016, 1 commit
    • drm/i915: Use SSE4.1 movntdqa to accelerate reads from WC memory · 0b1de5d5
      Committed by Chris Wilson
      This patch provides the infrastructure for performing a 16-byte aligned
      read from WC memory using the non-temporal instructions introduced with
      SSE4.1. Using movntdqa we can bypass the CPU caches and read directly
      from memory, ignoring the page attributes set on the CPU PTE, i.e.
      negating the impact of an otherwise UC access. Copying with movntdqa
      from WC is almost as fast as reading from WB memory, modulo the
      possibility of both hitting the CPU cache or leaving the data in the CPU
      cache for the next consumer. (The CPU cache itself may be flushed for
      the region of the movntdqa, and on later access the movntdqa reads from
      a separate internal buffer for the cacheline.) The write back to memory
      is, however, cached. A sketch of the copy loop follows this entry.
      
      This will be used in later patches to accelerate accessing WC memory.
      
      v2: Report whether the accelerated copy is successful/possible.
      v3: Function alignment override was only necessary when using the
      function target("sse4.1") - which is not necessary for emitting movntdqa
      from __asm__.
      v4: Improve notes on CPU cache behaviour vs non-temporal stores.
      v5: Fix byte offsets for unrolled moves.
      v6: Find all remaining typos of "movntqda", use kernel_fpu_begin.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Akash Goel <akash.goel@intel.com>
      Cc: Damien Lespiau <damien.lespiau@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1471001999-17787-2-git-send-email-chris@chris-wilson.co.uk
      0b1de5d5
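      A sketch of the copy loop, closely following the description above but
      with an illustrative function name; it assumes x86-64 with SSE4.1, a
      16-byte aligned source and destination, a length that is a multiple of
      16, and a kernel built without SSE codegen so the compiler never
      allocates the xmm registers used below:

      ```c
      /*
       * Illustrative only: stream 16-byte chunks out of WC memory with
       * movntdqa (bypassing the UC/WC read penalty) and store them to a
       * cached destination with movaps. kernel_fpu_begin()/end() guard the
       * xmm usage.
       */
      #include <asm/fpu/api.h>
      #include <linux/types.h>

      static void copy_from_wc_ntdqa(void *dst, const void *src, unsigned long len)
      {
          kernel_fpu_begin();

          len >>= 4; /* number of 16-byte chunks */
          while (len >= 4) {
              asm volatile(
                  "movntdqa   (%0), %%xmm0\n"
                  "movntdqa 16(%0), %%xmm1\n"
                  "movntdqa 32(%0), %%xmm2\n"
                  "movntdqa 48(%0), %%xmm3\n"
                  "movaps %%xmm0,   (%1)\n"
                  "movaps %%xmm1, 16(%1)\n"
                  "movaps %%xmm2, 32(%1)\n"
                  "movaps %%xmm3, 48(%1)\n"
                  : : "r" (src), "r" (dst) : "memory");
              src += 64;
              dst += 64;
              len -= 4;
          }
          while (len--) {
              asm volatile(
                  "movntdqa (%0), %%xmm0\n"
                  "movaps %%xmm0, (%1)\n"
                  : : "r" (src), "r" (dst) : "memory");
              src += 16;
              dst += 16;
          }

          kernel_fpu_end();
      }
      ```

      A caller would still need a runtime SSE4.1 check and an alignment check,
      falling back to a plain copy otherwise, which is what the v2 note about
      reporting whether the accelerated copy is possible alludes to.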
  31. 04 Aug 2016, 1 commit
    • drm/i915: Refactor activity tracking for requests · fa545cbf
      Committed by Chris Wilson
      With the introduction of requests, we amplified the number of atomic
      refcounted objects we use and update every execbuffer; from none to
      several references, and a set of references that need to be changed. We
      also introduced interesting side-effects in the order of retiring
      requests and objects.
      
      Instead of independently tracking the last request for an object, track
      the active objects for each request. The object will reside in the
      buffer list of its most recent active request and so we reduce the kref
      interchange to a list_move (see the sketch after this entry). Now
      retirements are entirely driven by the request, dramatically simplifying
      activity tracking on the objects themselves, and removing the ambiguity
      between retiring objects and retiring requests.
      
      Furthermore with the consolidation of managing the activity tracking
      centrally, we can look forward to using RCU to enable lockless lookup of
      the current active requests for an object. In the future, we will be
      able to query the status or wait upon rendering to an object without
      even touching the struct_mutex BKL.
      
      All told, less code, simpler and faster, and more extensible.
      
      v2: Add a typedef for the function pointer for convenience later.
      v3: Make the noop retirement callback explicit. Allow passing NULL to
      the init_request_active() which is expanded to a common noop function.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-16-git-send-email-chris@chris-wilson.co.uk
      fa545cbf
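      A minimal sketch of the request-driven tracking described above;
      sketch_request, sketch_object and the helpers are illustrative names,
      not the driver's structures:

      ```c
      /*
       * Illustrative only: an object lives on the active list of its most
       * recent request, so moving it between requests is a plain list_move
       * and retirement is driven entirely by the request.
       */
      #include <linux/list.h>

      struct sketch_request {
          struct list_head active_objects; /* objects last used by this request */
      };

      struct sketch_object {
          struct list_head active_link; /* INIT_LIST_HEAD()ed while idle */
      };

      /* Execbuffer path: (re)attach the object to its newest request. */
      static void mark_active(struct sketch_object *obj, struct sketch_request *rq)
      {
          /* Covers both idle -> active and older request -> newer request. */
          list_move_tail(&obj->active_link, &rq->active_objects);
      }

      /* Request completion: retire every object it was the last user of. */
      static void retire_request(struct sketch_request *rq)
      {
          struct sketch_object *obj, *next;

          list_for_each_entry_safe(obj, next, &rq->active_objects, active_link)
              list_del_init(&obj->active_link); /* the object is now idle */
      }
      ```

      Because an object only ever sits on one request's list, there is no
      per-object reference juggling on every execbuffer, matching the
      "reduce the kref interchange to a list_move" point above.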
  32. 20 Jul 2016, 1 commit
  33. 14 Jul 2016, 1 commit
  34. 05 Jul 2016, 1 commit
  35. 02 Jul 2016, 1 commit
    • drm/i915: Slaughter the thundering i915_wait_request herd · 688e6c72
      Committed by Chris Wilson
      One particularly stressful scenario consists of many independent tasks
      all competing for GPU time and waiting upon the results (e.g. realtime
      transcoding of many, many streams). One bottleneck in particular is that
      each client waits on its own results, but every client is woken up after
      every batchbuffer - hence the thunder of hooves as then every client must
      do its heavyweight dance to read a coherent seqno to see if it is the
      lucky one.
      
      Ideally, we only want one client to wake up after the interrupt and
      check its request for completion. Since the requests must retire in
      order, we can select the first client on the oldest request to be woken.
      Once that client has completed its wait, we can then wake up the
      next client and so on. However, all clients then incur latency as every
      process in the chain may be delayed for scheduling - this may also then
      cause some priority inversion. To reduce the latency, when a client
      is added or removed from the list, we scan the tree for completed
      seqno and wake up all the completed waiters in parallel (see the sketch
      after this entry).
      
      Using igt/benchmarks/gem_latency, we can demonstrate this effect. The
      benchmark measures the number of GPU cycles between completion of a
      batch and the client waking up from a call to wait-ioctl. With many
      concurrent waiters, with each on a different request, we observe that
      the wakeup latency before the patch scales nearly linearly with the
      number of waiters (before external factors kick in making the scaling much
      worse). After applying the patch, we can see that only the single waiter
      for the request is being woken up, providing a constant wakeup latency
      for every operation. However, the situation is not quite as rosy for
      many waiters on the same request, though to the best of my knowledge this
      is much less likely in practice. Here, we can observe that the
      concurrent waiters incur extra latency from being woken up by the
      solitary bottom-half, rather than directly by the interrupt. This
      appears to be scheduler induced (having discounted adverse effects from
      having a rbtree walk/erase in the wakeup path), each additional
      wake_up_process() costs approximately 1us on big core. Another effect of
      performing the secondary wakeups from the first bottom-half is the
      incurred delay this imposes on high priority threads - rather than
      immediately returning to userspace and leaving the interrupt handler to
      wake the others.
      
      To offset the delay incurred with additional waiters on a request, we
      could use a hybrid scheme that did a quick read in the interrupt handler
      and dequeued all the completed waiters (incurring the overhead in the
      interrupt handler, not the best plan either as we then incur GPU
      submission latency) but we would still have to wake up the bottom-half
      every time to do the heavyweight slow read. Or we could only kick the
      waiters on the seqno with the same priority as the current task (i.e. in
      the realtime waiter scenario, only it is woken up immediately by the
      interrupt and simply queues the next waiter before returning to userspace,
      minimising its delay at the expense of the chain, and also reducing
      contention on its scheduler runqueue). This is effective at avoiding
      long pauses in the interrupt handler and at avoiding the extra latency
      in realtime/high-priority waiters.
      
      v2: Convert from a kworker per engine into a dedicated kthread for the
      bottom-half.
      v3: Rename request members and tweak comments.
      v4: Use a per-engine spinlock in the breadcrumbs bottom-half.
      v5: Fix race in locklessly checking waiter status and kicking the task on
      adding a new waiter.
      v6: Fix deciding when to force the timer to hide missing interrupts.
      v7: Move the bottom-half from the kthread to the first client process.
      v8: Reword a few comments
      v9: Break the busy loop when the interrupt is unmasked or has fired.
      v10: Comments, unnecessary churn, better debugging from Tvrtko
      v11: Wake all completed waiters on removing the current bottom-half to
      reduce the latency of waking up a herd of clients all waiting on the
      same request.
      v12: Rearrange missed-interrupt fault injection so that it works with
      igt/drv_missed_irq_hang
      v13: Rename intel_breadcrumb and friends to intel_wait in preparation
      for signal handling.
      v14: RCU commentary, assert_spin_locked
      v15: Hide BUG_ON behind the compiler; report on gem_latency findings.
      v16: Sort seqno-groups by priority so that first-waiter has the highest
      task priority (and so avoid priority inversion).
      v17: Add waiters to post-mortem GPU hang state.
      v18: Return early for a completed wait after acquiring the spinlock.
      Avoids adding ourselves to the tree if the wait is already complete, and
      skips the awkward question of why we don't do completion wakeups for
      waits earlier than or equal to ourselves.
      v19: Prepare for init_breadcrumbs to fail. Later patches may want to
      allocate during init, so be prepared to propagate back the error code.
      
      Testcase: igt/gem_concurrent_blit
      Testcase: igt/benchmarks/gem_latency
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
      Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      Cc: "Goel, Akash" <akash.goel@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> #v18
      Link: http://patchwork.freedesktop.org/patch/msgid/1467390209-3576-6-git-send-email-chris@chris-wilson.co.uk
      688e6c72
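      A heavily simplified sketch of the single-waiter scheme above, using a
      sorted list where the real code uses a per-engine rbtree in the
      breadcrumbs bottom-half; all names here are illustrative:

      ```c
      /*
       * Illustrative only: waiters are kept ordered by the seqno they wait
       * for, the interrupt wakes only the oldest waiter, and a finished
       * waiter promotes the next one instead of the whole herd waking up.
       */
      #include <linux/list.h>
      #include <linux/sched.h>
      #include <linux/spinlock.h>
      #include <linux/types.h>

      struct sketch_waiter {
          struct list_head link;
          u32 seqno;                /* breadcrumb this task is waiting for */
          struct task_struct *task;
      };

      struct sketch_breadcrumbs {
          spinlock_t lock;          /* per-engine, as in v4 of the changelog */
          struct list_head waiters; /* sorted by seqno, oldest first */
      };

      /* Insert in seqno order; only the head of the list gets the irq wakeup. */
      static void add_waiter(struct sketch_breadcrumbs *b, struct sketch_waiter *w)
      {
          struct sketch_waiter *pos;

          spin_lock(&b->lock);
          list_for_each_entry(pos, &b->waiters, link)
              if ((s32)(pos->seqno - w->seqno) > 0)
                  break;
          list_add_tail(&w->link, &pos->link); /* before the first later seqno */
          spin_unlock(&b->lock);
      }

      /* Interrupt path: wake just the oldest waiter, not every client. */
      static void irq_wake_first(struct sketch_breadcrumbs *b)
      {
          struct sketch_waiter *first;

          spin_lock(&b->lock);
          first = list_first_entry_or_null(&b->waiters, struct sketch_waiter, link);
          if (first)
              wake_up_process(first->task);
          spin_unlock(&b->lock);
      }

      /* A satisfied waiter removes itself and hands the wakeup duty onward. */
      static void remove_waiter(struct sketch_breadcrumbs *b, struct sketch_waiter *w)
      {
          struct sketch_waiter *next;

          spin_lock(&b->lock);
          list_del(&w->link);
          next = list_first_entry_or_null(&b->waiters, struct sketch_waiter, link);
          if (next)
              wake_up_process(next->task);
          spin_unlock(&b->lock);
      }
      ```

      The sketch leaves out the parts the changelog spends most of its
      revisions on: scanning for already-completed seqnos when the list
      changes, the missed-interrupt timer, and the priority ordering of
      waiters on the same seqno.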
  36. 24 Jun 2016, 1 commit