1. 09 Jul 2020, 4 commits
  2. 20 Jun 2020, 1 commit
  3. 18 Jun 2020, 1 commit
  4. 16 Jun 2020, 3 commits
  5. 10 Jun 2020, 2 commits
  6. 05 Jun 2020, 1 commit
  7. 03 Jun 2020, 1 commit
  8. 01 Jun 2020, 1 commit
    • drm/i915: Handle very early engine initialisation failure · 0b0b2549
      Committed by Chris Wilson
      If we fail during engine setup, we may leave some engines not yet set up.
      During the error cleanup, we have to be careful not to try to use the
      uninitialised engines before discarding them.
      
      [   16.136152] RIP: 0010:__flush_work+0x198/0x1b0
      [   16.136168] Code: ff ff 8b 0b 48 8b 53 08 83 e1 08 48 0f ba 2b 03 80 c9 f0 e9 63 ff ff ff 0f 0b 48 83 c4 48 44 89 f0 5b 5d 41 5c 41 5d 41 5e c3 <0f> 0b 45 31 f6 e9 62 ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f
      [   16.136186] RSP: 0018:ffffc900003bb928 EFLAGS: 00010246
      [   16.136201] RAX: 0000000000000000 RBX: ffff88844f392168 RCX: 0000000000000000
      [   16.136216] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88844f392168
      [   16.136231] RBP: ffff88844f392130 R08: 0000000000000000 R09: 0000000000000001
      [   16.136246] R10: ffff888441e31e40 R11: ffff88845e329c70 R12: ffff88844f796988
      [   16.136261] R13: ffff888441e4fb80 R14: 0000000000000001 R15: ffff88844f790000
      [   16.136388] FS:  00007fecbd208880(0000) GS:ffff88845e380000(0000) knlGS:0000000000000000
      [   16.136405] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   16.136420] CR2: 00007ff3ce748f90 CR3: 0000000457a6a001 CR4: 00000000000606e0
      [   16.136437] Call Trace:
      [   16.136456]  ? try_to_del_timer_sync+0x3a/0x50
      [   16.136529]  intel_wakeref_wait_for_idle+0x87/0xb0 [i915]
      [   16.136606]  ? intel_engines_release+0x68/0xc0 [i915]
      [   16.136680]  intel_engines_release+0x49/0xc0 [i915]
      [   16.136757]  intel_gt_init+0x2f4/0x5e0 [i915]
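      
      The fix amounts to skipping any engine that never completed setup during
      teardown. A minimal C sketch of that guard, with hypothetical names
      standing in for the real i915 structures:
      
      /* Illustrative model only: guard teardown against engines that were
       * never set up. Names are invented, not the actual i915 symbols. */
      #include <stdbool.h>
      #include <stddef.h>
      
      struct engine {
              bool setup_done;                /* set only once setup fully succeeded */
              void (*release)(struct engine *e);
      };
      
      static void engines_release(struct engine **engines, size_t count)
      {
              for (size_t i = 0; i < count; i++) {
                      struct engine *e = engines[i];
      
                      if (!e || !e->setup_done)
                              continue;       /* never initialised: nothing to flush */
      
                      e->release(e);          /* safe: engine completed setup */
                      engines[i] = NULL;
              }
      }
      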
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200601072446.19548-1-chris@chris-wilson.co.uk
  9. 14 May 2020, 1 commit
    • drm/i915: Show per-engine default property values in sysfs · 7a0ba6b4
      Committed by Chris Wilson
      By providing the default values configured into the kernel via sysfs, it
      is much more convenient for userspace to restore those sane defaults, or
      at least to know what is considered a good baseline. This is useful, for
      example, to clean up after any failed userspace prior to commencing new
      jobs (a sketch of restoring one of these defaults follows the listing below).
      
      /sys/class/drm/card0/engine/rcs0/
      ├── capabilities
      ├── class
      ├── .defaults
      │   ├── heartbeat_interval_ms
      │   ├── max_busywait_duration_ns
      │   ├── preempt_timeout_ms
      │   ├── stop_timeout_ms
      │   └── timeslice_duration_ms
      ├── heartbeat_interval_ms
      ├── instance
      ├── known_capabilities
      ├── max_busywait_duration_ns
      ├── mmio_base
      ├── name
      ├── preempt_timeout_ms
      ├── stop_timeout_ms
      └── timeslice_duration_ms
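      
      A minimal userspace sketch (illustrative only, using the rcs0 paths from
      the listing above) of restoring one tunable from its .defaults entry:
      
      /* Copy a per-engine default back into the live sysfs property.
       * Paths follow the rcs0 tree above; error handling kept minimal. */
      #include <stdio.h>
      
      int main(void)
      {
              const char *def =
                      "/sys/class/drm/card0/engine/rcs0/.defaults/preempt_timeout_ms";
              const char *live =
                      "/sys/class/drm/card0/engine/rcs0/preempt_timeout_ms";
              char buf[64];
              FILE *in = fopen(def, "r");
              FILE *out = fopen(live, "w");
      
              if (!in || !out)
                      return 1;
              if (fgets(buf, sizeof(buf), in))
                      fputs(buf, out);        /* write the default back */
              fclose(in);
              fclose(out);
              return 0;
      }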
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200514062905.28668-1-chris@chris-wilson.co.uk
  10. 07 May 2020, 1 commit
    • drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore · 220dcfc1
      Committed by Chris Wilson
      If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
      user batch or in our own preamble, the engine raises a
      GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
      respond to a semaphore wait by yielding the timeslice, if we have
      another context to yield to!
      
      The only real complication is that the interrupt is only generated for
      the start of the semaphore wait, and is asynchronous to our
      process_csb() -- that is, we may not have registered the timeslice before
      we see the interrupt. To ensure we don't miss a potential semaphore
      blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
      the interrupt and apply it to the next timeslice regardless of whether it
      was active at the time.
      
      v2: We use semaphores in preempt-to-busy, within the timeslicing
      implementation itself! Ergo, when we do insert a preemption due to an
      expired timeslice, the new context may start with the missed semaphore
      flagged by the retired context and be yielded, ad infinitum. To avoid
      this, read the context id at the time of the semaphore interrupt and
      only yield if that context is still active.
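      
      A rough C model of the v2 bookkeeping, with invented names rather than
      the actual execlists code: the interrupt latches the ccid of the context
      that hit the semaphore, and the scheduler only yields if that context is
      still the active one when the decision is made.
      
      /* Simplified, illustrative model of the semaphore-yield hint. */
      #include <stdbool.h>
      #include <stdint.h>
      
      #define CCID_INVALID 0u
      
      struct sched_engine {
              uint32_t yield_ccid;    /* latched by the semaphore interrupt */
              uint32_t active_ccid;   /* context currently running on the HW */
      };
      
      /* Called from the (modelled) GT_WAIT_ON_SEMAPHORE interrupt handler. */
      static void on_semaphore_wait_irq(struct sched_engine *se, uint32_t hw_ccid)
      {
              se->yield_ccid = hw_ccid;       /* remember who hit the semaphore */
      }
      
      /* Consulted when deciding whether to expire the timeslice early. */
      static bool should_yield_timeslice(struct sched_engine *se)
      {
              /* v2: only honour the hint if the context that raised it is
               * still active; otherwise the hint is stale and is discarded. */
              bool yield = se->yield_ccid != CCID_INVALID &&
                           se->yield_ccid == se->active_ccid;
      
              se->yield_ccid = CCID_INVALID;  /* consume the hint */
              return yield;
      }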
      
      Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
      (cherry picked from commit c4e8ba73)
      (cherry picked from commit cd60e4ac4738a6921592c4f7baf87f9a3499f0e2)
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
  11. 01 May 2020, 1 commit
  12. 30 Apr 2020, 2 commits
    • drm/i915/gt: Always enable busy-stats for execlists · 426d0073
      Committed by Chris Wilson
      In the near future, we will utilize the busy-stats on each engine to
      approximate the C0 cycles of each, and use that as an input to a manual
      RPS mechanism. That entails having busy-stats always enabled and so we
      can remove the enable/disable routines and simplify the pmu setup. As a
      consequence of always having the stats enabled, we can also show the
      current active time via sysfs/engine/xcs/active_time_ns.
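      
      With the stats always on, userspace can estimate utilisation by sampling
      the counter twice; a hedged sketch (the exact sysfs path is assumed from
      the description above, not verified against a particular kernel):
      
      /* Sample active_time_ns twice and report an approximate busy fraction. */
      #include <stdio.h>
      #include <time.h>
      
      static long long read_ns(const char *path)
      {
              long long v = -1;
              FILE *f = fopen(path, "r");
      
              if (f) {
                      if (fscanf(f, "%lld", &v) != 1)
                              v = -1;
                      fclose(f);
              }
              return v;
      }
      
      int main(void)
      {
              const char *p = "/sys/class/drm/card0/engine/rcs0/active_time_ns";
              struct timespec delay = { .tv_sec = 1, .tv_nsec = 0 };
              long long a = read_ns(p);
      
              nanosleep(&delay, NULL);
              long long b = read_ns(p);
              if (a >= 0 && b >= 0)
                      printf("approx. busy: %.1f%%\n", 100.0 * (double)(b - a) / 1e9);
              return 0;
      }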
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200429205446.3259-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Keep a no-frills swappable copy of the default context state · be1cb55a
      Committed by Chris Wilson
      We need to keep the default context state around to instantiate new
      contexts (aka golden rendercontext), and we also keep it pinned while
      the engine is active so that we can quickly reset a hanging context.
      However, the default contexts are large enough to merit keeping in
      swappable memory as opposed to kernel memory, so we store them inside
      shmemfs. Currently, we use the normal GEM objects to create the default
      context image, but we can throw away all but the shmemfs file.
      
      This greatly simplifies the tricky power management code which wants to
      run underneath the normal GT locking, and we definitely do not want to
      use any high level objects that may appear to recurse back into the GT.
      Perhaps the primary advantage of the complex GEM object is that we
      aggressively cache the mapping, whereas here we recreate the vm_area
      every time we unpark. At worst, we add a lightweight cache, but first
      find a microbenchmark that is impacted.
      
      Having started to create some utility functions to make working with
      shmemfs objects easier, we can start putting them to wider use, where
      GEM objects are overkill, such as storing persistent error state.
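      
      The shmemfs utilities amount to an anonymous tmpfs file plus the ordinary
      kernel read/write helpers; a rough kernel-side sketch of the idea (names
      illustrative, not the actual i915 helpers):
      
      /* Stash a default context image in swappable shmemfs pages. */
      #include <linux/shmem_fs.h>
      #include <linux/fs.h>
      #include <linux/err.h>
      
      static struct file *save_default_state(const void *state, size_t len)
      {
              struct file *file;
              loff_t pos = 0;
      
              file = shmem_file_setup("i915-default-state", len, 0);
              if (IS_ERR(file))
                      return file;
      
              /* copy the golden context image into shmem (swappable) pages */
              kernel_write(file, state, len, &pos);
              return file;
      }
      
      static int load_default_state(struct file *file, void *buf, size_t len)
      {
              loff_t pos = 0;
      
              /* read it back when rebuilding a context image after unpark */
              return kernel_read(file, buf, len, &pos) == len ? 0 : -EIO;
      }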
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Ramalingam C <ramalingam.c@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200429172429.6054-1-chris@chris-wilson.co.uk
  13. 29 Apr 2020, 1 commit
  14. 07 Apr 2020, 1 commit
    • drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore · c4e8ba73
      Committed by Chris Wilson
      If we find ourselves waiting on a MI_SEMAPHORE_WAIT, either within the
      user batch or in our own preamble, the engine raises a
      GT_WAIT_ON_SEMAPHORE interrupt. We can unmask that interrupt and so
      respond to a semaphore wait by yielding the timeslice, if we have
      another context to yield to!
      
      The only real complication is that the interrupt is only generated for
      the start of the semaphore wait, and is asynchronous to our
      process_csb() -- that is, we may not have registered the timeslice before
      we see the interrupt. To ensure we don't miss a potential semaphore
      blocking forward progress (e.g. selftests/live_timeslice_preempt) we mark
      the interrupt and apply it to the next timeslice regardless of whether it
      was active at the time.
      
      v2: We use semaphores in preempt-to-busy, within the timeslicing
      implementation itself! Ergo, when we do insert a preemption due to an
      expired timeslice, the new context may start with the missed semaphore
      flagged by the retired context and be yielded, ad infinitum. To avoid
      this, read the context id at the time of the semaphore interrupt and
      only yield if that context is still active.
      
      Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200407130811.17321-1-chris@chris-wilson.co.uk
  15. 04 Apr 2020, 1 commit
  16. 03 Apr 2020, 1 commit
    • drm/i915: Keep a per-engine request pool · 43acd651
      Committed by Chris Wilson
      Add a tiny per-engine request mempool so that we should always have a
      request available for power-management allocations from tricky
      contexts. This reserve is expected to be used only for kernel
      contexts when barriers must be emitted [almost] without fail.
      
      The main consumer of this reserved request is expected to be engine-pm,
      for which we know that there will always be at least the previous pm
      request that we can reuse under memory pressure (so there should always
      be a spare request for engine_park()).
      
      This is an alternative to using a comparatively bulky mempool, which
      requires custom handling both for our reserved allocation requirement
      and for protecting our TYPESAFE_BY_RCU slab cache. The advantage of a
      mempool would be that it would allow us to keep a larger per-engine
      request pool. However, converting over to mempool is straightforward
      should the need arise.
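      
      The reserve is no more than a single cached spare per engine: allocation
      falls back to the spare when the allocator fails, and retiring a request
      refills the slot. A generic C model of that scheme (illustrative, not the
      i915 code):
      
      /* One-slot per-engine reserve; names are invented for illustration. */
      #include <stdlib.h>
      
      struct request { char payload[256]; };
      
      struct engine_pool {
              struct request *spare;  /* retired request kept in reserve */
      };
      
      static struct request *request_alloc(struct engine_pool *pool)
      {
              struct request *rq = malloc(sizeof(*rq));
      
              if (!rq) {              /* under memory pressure, use the reserve */
                      rq = pool->spare;
                      pool->spare = NULL;
              }
              return rq;
      }
      
      static void request_retire(struct engine_pool *pool, struct request *rq)
      {
              if (!pool->spare)       /* refill the reserve before freeing */
                      pool->spare = rq;
              else
                      free(rq);
      }
      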
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-and-tested-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200402184037.21630-1-chris@chris-wilson.co.uk
  17. 01 Apr 2020, 2 commits
  18. 26 Mar 2020, 1 commit
  19. 12 Mar 2020, 1 commit
  20. 11 Mar 2020, 1 commit
  21. 29 Feb 2020, 1 commit
  22. 22 Feb 2020, 1 commit
  23. 19 Feb 2020, 1 commit
  24. 12 Feb 2020, 1 commit
  25. 08 Feb 2020, 1 commit
  26. 01 Feb 2020, 1 commit
  27. 31 Jan 2020, 1 commit
  28. 30 Jan 2020, 2 commits
  29. 29 Jan 2020, 1 commit
  30. 25 Jan 2020, 1 commit
  31. 22 Jan 2020, 1 commit