1. 23 June, 2020 1 commit
    • drm/i915/params: switch to device specific parameters · 8a25c4be
      Committed by Jani Nikula
      Start using device-specific parameters instead of module parameters for
      most things. The module parameters become the immutable initial values
      for i915 parameters. The device-specific parameters in i915->params
      start life as a copy of i915_modparams. Any later changes are only
      reflected in debugfs.
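
      As a rough illustration of the pattern (a hedged sketch with made-up
      names, not the driver's actual code), the probe path copies the
      module-wide defaults into a per-device struct, and debugfs writes then
      touch only that copy:

          struct i915_params_like {
                  int enable_guc;
                  bool enable_hangcheck;
                  /* ... one field per module parameter ... */
          };

          /* filled in by module_param(); treated as read-only afterwards */
          static const struct i915_params_like modparams_initial = {
                  .enable_guc = -1,
                  .enable_hangcheck = true,
          };

          struct device_priv_like {
                  struct i915_params_like params; /* per-device, debugfs-mutable */
          };

          static void device_params_init(struct device_priv_like *priv)
          {
                  priv->params = modparams_initial; /* starts life as a copy */
          }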
      
      The stragglers are:
      
      * i915.force_probe and i915.modeset. Needed before dev_priv is
        available. This is fine because the parameters are read-only and never
        modified.
      
      * i915.verbose_state_checks. Passing dev_priv to I915_STATE_WARN and
        I915_STATE_WARN_ON would result in massive and ugly churn. This is
        handled by not exposing the parameter via debugfs, and leaving the
        parameter writable in sysfs. This may be fixed up in follow-up work.
      
      * i915.inject_probe_failure. Only makes sense in terms of the module,
        not the device. This is handled by not exposing the parameter via
        debugfs.
      
      v2: Fix uc i915 lookup code (Michał Winiarski)
      
      Cc: Juha-Pekka Heikkilä <juha-pekka.heikkila@intel.com>
      Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Acked-by: Michał Winiarski <michal.winiarski@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20200618150402.14022-1-jani.nikula@intel.com
  2. 05 May, 2020 1 commit
    • drm/i915: Avoid dereferencing a dead context · 30523408
      Committed by Chris Wilson
      Once the intel_context is closed, the GEM context may be freed and so
      the link from intel_context.gem_context is invalid.
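
      A hedged sketch of the general fix pattern (illustrative types and
      names, not the exact i915 diff): a back-pointer to an object freed via
      RCU may only be chased under rcu_read_lock(), and must be pinned with
      kref_get_unless_zero() before use outside the read-side section:

          #include <linux/kref.h>
          #include <linux/rcupdate.h>

          struct gem_ctx {
                  struct kref ref;
                  /* ... */
          };

          struct ce_like {
                  struct gem_ctx __rcu *gem_context;
          };

          static struct gem_ctx *get_gem_context(struct ce_like *ce)
          {
                  struct gem_ctx *ctx;

                  rcu_read_lock();
                  ctx = rcu_dereference(ce->gem_context);
                  if (ctx && !kref_get_unless_zero(&ctx->ref))
                          ctx = NULL; /* lost the race; treat as already gone */
                  rcu_read_unlock();

                  return ctx; /* caller must kref_put() when done */
          }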
      
      <3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <3>[  219.782996] Read of size 8 at addr ffff8881d7dff0b8 by task kworker/0:1/12
      
      <4>[  219.783052] CPU: 0 PID: 12 Comm: kworker/0:1 Tainted: G     U            5.7.0-rc2-g1f3ffd7683d54-kasan_118+ #1
      <4>[  219.783055] Hardware name: System manufacturer System Product Name/Z170 PRO GAMING, BIOS 3402 04/26/2017
      <4>[  219.783105] Workqueue: events heartbeat [i915]
      <4>[  219.783109] Call Trace:
      <4>[  219.783113]  <IRQ>
      <4>[  219.783119]  dump_stack+0x96/0xdb
      <4>[  219.783177]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783182]  print_address_description.constprop.6+0x16/0x310
      <4>[  219.783239]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783295]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783300]  __kasan_report+0x137/0x190
      <4>[  219.783359]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783366]  kasan_report+0x32/0x50
      <4>[  219.783426]  intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783481]  execlists_reset+0x39c/0x13d0 [i915]
      <4>[  219.783494]  ? mark_held_locks+0x9e/0xe0
      <4>[  219.783546]  ? execlists_hold+0xfc0/0xfc0 [i915]
      <4>[  219.783551]  ? lockdep_hardirqs_on+0x348/0x5f0
      <4>[  219.783557]  ? _raw_spin_unlock_irqrestore+0x34/0x60
      <4>[  219.783606]  ? execlists_submission_tasklet+0x118/0x3a0 [i915]
      <4>[  219.783615]  tasklet_action_common.isra.14+0x13b/0x410
      <4>[  219.783623]  ? __do_softirq+0x1e4/0x9a7
      <4>[  219.783630]  __do_softirq+0x226/0x9a7
      <4>[  219.783643]  do_softirq_own_stack+0x2a/0x40
      <4>[  219.783647]  </IRQ>
      <4>[  219.783692]  ? heartbeat+0x3e2/0x10f0 [i915]
      <4>[  219.783696]  do_softirq.part.13+0x49/0x50
      <4>[  219.783700]  __local_bh_enable_ip+0x1a2/0x1e0
      <4>[  219.783748]  heartbeat+0x409/0x10f0 [i915]
      <4>[  219.783801]  ? __live_idle_pulse+0x9f0/0x9f0 [i915]
      <4>[  219.783806]  ? lock_acquire+0x1ac/0x8a0
      <4>[  219.783811]  ? process_one_work+0x811/0x1870
      <4>[  219.783827]  ? rcu_read_lock_sched_held+0x9c/0xd0
      <4>[  219.783832]  ? rcu_read_lock_bh_held+0xb0/0xb0
      <4>[  219.783836]  ? _raw_spin_unlock_irq+0x1f/0x40
      <4>[  219.783845]  process_one_work+0x8ca/0x1870
      <4>[  219.783848]  ? lock_acquire+0x1ac/0x8a0
      <4>[  219.783852]  ? worker_thread+0x1d0/0xb80
      <4>[  219.783864]  ? pwq_dec_nr_in_flight+0x2c0/0x2c0
      <4>[  219.783870]  ? do_raw_spin_lock+0x129/0x290
      <4>[  219.783886]  worker_thread+0x82/0xb80
      <4>[  219.783895]  ? __kthread_parkme+0xaf/0x1b0
      <4>[  219.783902]  ? process_one_work+0x1870/0x1870
      <4>[  219.783906]  kthread+0x34e/0x420
      <4>[  219.783911]  ? kthread_create_on_node+0xc0/0xc0
      <4>[  219.783918]  ret_from_fork+0x3a/0x50
      
      <3>[  219.783950] Allocated by task 1264:
      <4>[  219.783975]  save_stack+0x19/0x40
      <4>[  219.783978]  __kasan_kmalloc.constprop.3+0xa0/0xd0
      <4>[  219.784029]  i915_gem_create_context+0xa2/0xab8 [i915]
      <4>[  219.784081]  i915_gem_context_create_ioctl+0x1fa/0x450 [i915]
      <4>[  219.784085]  drm_ioctl_kernel+0x1d8/0x270
      <4>[  219.784088]  drm_ioctl+0x676/0x930
      <4>[  219.784092]  ksys_ioctl+0xb7/0xe0
      <4>[  219.784096]  __x64_sys_ioctl+0x6a/0xb0
      <4>[  219.784100]  do_syscall_64+0x94/0x530
      <4>[  219.784103]  entry_SYSCALL_64_after_hwframe+0x49/0xb3
      
      <3>[  219.784120] Freed by task 12:
      <4>[  219.784141]  save_stack+0x19/0x40
      <4>[  219.784145]  __kasan_slab_free+0x130/0x180
      <4>[  219.784148]  kmem_cache_free_bulk+0x1bd/0x500
      <4>[  219.784152]  kfree_rcu_work+0x1d8/0x890
      <4>[  219.784155]  process_one_work+0x8ca/0x1870
      <4>[  219.784158]  worker_thread+0x82/0xb80
      <4>[  219.784162]  kthread+0x34e/0x420
      <4>[  219.784165]  ret_from_fork+0x3a/0x50
      
      Fixes: 2e46a2a0 ("drm/i915: Use explicit flag to mark unreachable intel_context")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200428090255.10035-1-chris@chris-wilson.co.uk
      (cherry picked from commit 24aac336)
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
  3. 30 April, 2020 1 commit
    • drm/i915/gt: Keep a no-frills swappable copy of the default context state · be1cb55a
      Committed by Chris Wilson
      We need to keep the default context state around to instantiate new
      contexts (aka golden rendercontext), and we also keep it pinned while
      the engine is active so that we can quickly reset a hanging context.
      However, the default contexts are large enough to merit keeping in
      swappable memory as opposed to kernel memory, so we store them inside
      shmemfs. Currently, we use the normal GEM objects to create the default
      context image, but we can throw away all but the shmemfs file.
      
      This greatly simplifies the tricky power management code which wants to
      run underneath the normal GT locking, and we definitely do not want to
      use any high level objects that may appear to recurse back into the GT.
      Perhaps the primary advantage of the complex GEM object is that we
      aggressively cache the mapping, whereas here we recreate the vm_area
      every time we unpark. At worst, we can add a lightweight cache, but
      first we should find a microbenchmark that is impacted.
      
      Having started to create some utility functions to make working with
      shmemfs objects easier, we can start putting them to wider use, where
      GEM objects are overkill, such as storing persistent error state.
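
      A minimal sketch of the shmemfs-backed blob idea (function and name
      choices here are illustrative assumptions, not the driver's actual
      helpers): keep only a swappable shmemfs file holding the state, rather
      than a full GEM object:

          #include <linux/err.h>
          #include <linux/file.h>
          #include <linux/shmem_fs.h>

          static struct file *save_default_state(const void *state, size_t len)
          {
                  struct file *file;
                  loff_t pos = 0;
                  ssize_t ret;

                  /* anonymous shmemfs file: pages can swap out under pressure */
                  file = shmem_file_setup("i915-default-state", len, VM_NORESERVE);
                  if (IS_ERR(file))
                          return file;

                  ret = kernel_write(file, state, len, &pos);
                  if (ret != len) {
                          fput(file);
                          return ERR_PTR(ret < 0 ? ret : -EIO);
                  }

                  return file; /* caller holds the reference; fput() to drop */
          }
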
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Cc: Ramalingam C <ramalingam.c@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Matthew Auld <matthew.auld@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200429172429.6054-1-chris@chris-wilson.co.uk
  4. 29 April, 2020 1 commit
    • drm/i915: Avoid dereferencing a dead context · 24aac336
      Committed by Chris Wilson
      Once the intel_context is closed, the GEM context may be freed and so
      the link from intel_context.gem_context is invalid.
      
      <3>[  219.782944] BUG: KASAN: use-after-free in intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <3>[  219.782996] Read of size 8 at addr ffff8881d7dff0b8 by task kworker/0:1/12
      
      <4>[  219.783052] CPU: 0 PID: 12 Comm: kworker/0:1 Tainted: G     U            5.7.0-rc2-g1f3ffd7683d54-kasan_118+ #1
      <4>[  219.783055] Hardware name: System manufacturer System Product Name/Z170 PRO GAMING, BIOS 3402 04/26/2017
      <4>[  219.783105] Workqueue: events heartbeat [i915]
      <4>[  219.783109] Call Trace:
      <4>[  219.783113]  <IRQ>
      <4>[  219.783119]  dump_stack+0x96/0xdb
      <4>[  219.783177]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783182]  print_address_description.constprop.6+0x16/0x310
      <4>[  219.783239]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783295]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783300]  __kasan_report+0x137/0x190
      <4>[  219.783359]  ? intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783366]  kasan_report+0x32/0x50
      <4>[  219.783426]  intel_engine_coredump_alloc+0x1bc3/0x2250 [i915]
      <4>[  219.783481]  execlists_reset+0x39c/0x13d0 [i915]
      <4>[  219.783494]  ? mark_held_locks+0x9e/0xe0
      <4>[  219.783546]  ? execlists_hold+0xfc0/0xfc0 [i915]
      <4>[  219.783551]  ? lockdep_hardirqs_on+0x348/0x5f0
      <4>[  219.783557]  ? _raw_spin_unlock_irqrestore+0x34/0x60
      <4>[  219.783606]  ? execlists_submission_tasklet+0x118/0x3a0 [i915]
      <4>[  219.783615]  tasklet_action_common.isra.14+0x13b/0x410
      <4>[  219.783623]  ? __do_softirq+0x1e4/0x9a7
      <4>[  219.783630]  __do_softirq+0x226/0x9a7
      <4>[  219.783643]  do_softirq_own_stack+0x2a/0x40
      <4>[  219.783647]  </IRQ>
      <4>[  219.783692]  ? heartbeat+0x3e2/0x10f0 [i915]
      <4>[  219.783696]  do_softirq.part.13+0x49/0x50
      <4>[  219.783700]  __local_bh_enable_ip+0x1a2/0x1e0
      <4>[  219.783748]  heartbeat+0x409/0x10f0 [i915]
      <4>[  219.783801]  ? __live_idle_pulse+0x9f0/0x9f0 [i915]
      <4>[  219.783806]  ? lock_acquire+0x1ac/0x8a0
      <4>[  219.783811]  ? process_one_work+0x811/0x1870
      <4>[  219.783827]  ? rcu_read_lock_sched_held+0x9c/0xd0
      <4>[  219.783832]  ? rcu_read_lock_bh_held+0xb0/0xb0
      <4>[  219.783836]  ? _raw_spin_unlock_irq+0x1f/0x40
      <4>[  219.783845]  process_one_work+0x8ca/0x1870
      <4>[  219.783848]  ? lock_acquire+0x1ac/0x8a0
      <4>[  219.783852]  ? worker_thread+0x1d0/0xb80
      <4>[  219.783864]  ? pwq_dec_nr_in_flight+0x2c0/0x2c0
      <4>[  219.783870]  ? do_raw_spin_lock+0x129/0x290
      <4>[  219.783886]  worker_thread+0x82/0xb80
      <4>[  219.783895]  ? __kthread_parkme+0xaf/0x1b0
      <4>[  219.783902]  ? process_one_work+0x1870/0x1870
      <4>[  219.783906]  kthread+0x34e/0x420
      <4>[  219.783911]  ? kthread_create_on_node+0xc0/0xc0
      <4>[  219.783918]  ret_from_fork+0x3a/0x50
      
      <3>[  219.783950] Allocated by task 1264:
      <4>[  219.783975]  save_stack+0x19/0x40
      <4>[  219.783978]  __kasan_kmalloc.constprop.3+0xa0/0xd0
      <4>[  219.784029]  i915_gem_create_context+0xa2/0xab8 [i915]
      <4>[  219.784081]  i915_gem_context_create_ioctl+0x1fa/0x450 [i915]
      <4>[  219.784085]  drm_ioctl_kernel+0x1d8/0x270
      <4>[  219.784088]  drm_ioctl+0x676/0x930
      <4>[  219.784092]  ksys_ioctl+0xb7/0xe0
      <4>[  219.784096]  __x64_sys_ioctl+0x6a/0xb0
      <4>[  219.784100]  do_syscall_64+0x94/0x530
      <4>[  219.784103]  entry_SYSCALL_64_after_hwframe+0x49/0xb3
      
      <3>[  219.784120] Freed by task 12:
      <4>[  219.784141]  save_stack+0x19/0x40
      <4>[  219.784145]  __kasan_slab_free+0x130/0x180
      <4>[  219.784148]  kmem_cache_free_bulk+0x1bd/0x500
      <4>[  219.784152]  kfree_rcu_work+0x1d8/0x890
      <4>[  219.784155]  process_one_work+0x8ca/0x1870
      <4>[  219.784158]  worker_thread+0x82/0xb80
      <4>[  219.784162]  kthread+0x34e/0x420
      <4>[  219.784165]  ret_from_fork+0x3a/0x50
      
      Fixes: 2e46a2a0 ("drm/i915: Use explicit flag to mark unreachable intel_context")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20200428090255.10035-1-chris@chris-wilson.co.uk
  5. 25 April, 2020 1 commit
  6. 08 April, 2020 1 commit
  7. 18 February, 2020 1 commit
  8. 17 February, 2020 1 commit
  9. 16 February, 2020 1 commit
  10. 14 February, 2020 1 commit
  11. 11 February, 2020 1 commit
  12. 30 January, 2020 1 commit
  13. 29 January, 2020 1 commit
  14. 28 January, 2020 1 commit
  15. 22 January, 2020 1 commit
  16. 10 January, 2020 4 commits
  17. 23 December, 2019 1 commit
  18. 20 December, 2019 1 commit
  19. 09 December, 2019 1 commit
  20. 25 November, 2019 1 commit
  21. 18 November, 2019 1 commit
  22. 14 November, 2019 1 commit
    • drm/i915: Avoid atomic context for error capture · 48715f70
      Committed by Bruce Chang
      io_mapping_map_atomic/kmap_atomic are occasionally taken during error
      capture (when no aperture slot is preallocated for its use), but the
      capture and compression routines are now run in normal context:
      
      <3> [113.316247] BUG: sleeping function called from invalid context at mm/page_alloc.c:4653
      <3> [113.318190] in_atomic(): 1, irqs_disabled(): 0, pid: 678, name: debugfs_test
      <4> [113.319900] no locks held by debugfs_test/678.
      <3> [113.321002] Preemption disabled at:
      <4> [113.321130] [<ffffffffa02506d4>] i915_error_object_create+0x494/0x610 [i915]
      <4> [113.327259] Call Trace:
      <4> [113.327871] dump_stack+0x67/0x9b
      <4> [113.328683] ___might_sleep+0x167/0x250
      <4> [113.329618] __alloc_pages_nodemask+0x26b/0x1110
      <4> [113.334614] pool_alloc.constprop.19+0x14/0x60 [i915]
      <4> [113.335951] compress_page+0x7c/0x100 [i915]
      <4> [113.337110] i915_error_object_create+0x4bd/0x610 [i915]
      <4> [113.338515] i915_capture_gpu_state+0x384/0x1680 [i915]
      
      However, it is not a good idea to run the slow compression inside atomic
      context, so we choose not to.
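
      A hedged sketch of the underlying rule (not the actual diff): the
      *_atomic mapping helpers disable preemption, so any path that may
      sleep (such as an allocation inside the compressor) must use the
      non-atomic map/unmap variants instead:

          #include <linux/io-mapping.h>

          static void copy_capture_page(struct io_mapping *ggtt,
                                        resource_size_t offset, void *dst)
          {
                  void __iomem *src;

                  /*
                   * io_mapping_map_wc() leaves preemption enabled, so the
                   * body may perform sleeping allocations. The buggy pattern
                   * used io_mapping_map_atomic_wc(), whose preempt-disabled
                   * section makes any GFP_KERNEL allocation a "sleeping
                   * function called from invalid context" splat.
                   */
                  src = io_mapping_map_wc(ggtt, offset, PAGE_SIZE);
                  memcpy_fromio(dst, src, PAGE_SIZE);
                  io_mapping_unmap(src);
          }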
      
      Fixes: 895d8ebe ("drm/i915: error capture with no ggtt slot")
      Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com>
      Reviewed-by: Brian Welty <brian.welty@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191113231104.24208-1-yu.bruce.chang@intel.com
  23. 07 November, 2019 1 commit
  24. 30 October, 2019 2 commits
  25. 29 October, 2019 1 commit
  26. 26 October, 2019 1 commit
  27. 24 October, 2019 1 commit
  28. 04 October, 2019 2 commits
    • drm/i915: Remove logical HW ID · 2935ed53
      Committed by Chris Wilson
      With the introduction of ctx->engines[] we allow multiple logical
      contexts to be used on the same engine (e.g. with virtual engines).
      According to bspec, each logical context requires a unique tag in order
      for context-switching to occur correctly between them. [Simple
      experiments show that it is not so easy to trick the HW into performing
      a lite-restore with matching logical IDs, though my memory from early
      Broadwell experiments does suggest that it should be generating
      lite-restores.]
      
      We only need to keep a unique tag for the active lifetime of the
      context, and for as long as we need to identify that context. The HW
      uses the tag to determine if it should use a lite-restore (why not the
      LRCA?) and passes the tag back for various status identifiers. The only
      status we need to track is for OA, so when using perf, we assign the
      specific context a unique tag.
      
      v2: Calculate required number of tags to fill ELSP.
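
      A hedged sketch of the tag-assignment idea (the driver itself may use
      a simpler per-engine bitmap; the IDA here is an illustrative stand-in):
      each context active on an engine draws a small unique tag from a fixed
      pool sized to fill the ELSP, and returns it when it idles:

          #include <linux/idr.h>

          #define MAX_CONTEXT_TAG 16 /* assumption: enough tags to fill the ELSP */

          static DEFINE_IDA(context_tags);

          static int context_tag_get(void)
          {
                  /* tag 0 reserved for "unassigned"; allocate from [1, MAX) */
                  return ida_alloc_range(&context_tags, 1,
                                         MAX_CONTEXT_TAG - 1, GFP_KERNEL);
          }

          static void context_tag_put(int tag)
          {
                  ida_free(&context_tags, tag);
          }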
      
      Fixes: 976b55f0 ("drm/i915: Allow a context to define its set of engines")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111895
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-14-chris@chris-wilson.co.uk
    • drm/i915: Coordinate i915_active with its own mutex · b1e3177b
      Committed by Chris Wilson
      Forgo the struct_mutex serialisation for i915_active, and interpose its
      own mutex handling for active/retire.
      
      This is a multi-layered sleight-of-hand. First, we had to ensure that no
      active/retire callbacks accidentally inverted the mutex ordering rules,
      nor assumed that they were themselves serialised by struct_mutex. More
      challenging, though, is the rule for updating elements of the active
      rbtree. Instead of the whole i915_active now being serialised by
      struct_mutex, allocations/rotations of the tree are serialised by the
      i915_active.mutex and individual nodes are serialised by the caller
      using the i915_timeline.mutex (we need to use nested spinlocks to
      interact with the dma_fence callback lists).
      
      The pain point here is that instead of a single mutex around execbuf, we
      now have to take a mutex for each active tracker (one for each vma,
      context, etc.) and a couple of spinlocks for each fence update. The
      improvement in fine-grained locking, allowing for multiple concurrent
      clients (eventually!), should be worth it in typical loads.
      
      v2: Add some comments that barely elucidate anything :(
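
      A hedged sketch of the locking shift (illustrative structures, not the
      i915 code): each tracker carries its own mutex, so updates serialise
      against that one object rather than a device-global struct_mutex:

          #include <linux/list.h>
          #include <linux/mutex.h>
          #include <linux/spinlock.h>

          struct active_tracker {
                  struct mutex mutex;    /* serialises this tracker only */
                  struct list_head nodes;
                  spinlock_t fence_lock; /* nests under dma_fence callbacks */
          };

          static void active_add(struct active_tracker *act,
                                 struct list_head *node)
          {
                  /* fine-grained: per-tracker lock, not device-global */
                  mutex_lock(&act->mutex);
                  list_add(node, &act->nodes);
                  mutex_unlock(&act->mutex);
          }
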
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
  29. 10 September, 2019 1 commit
  30. 30 August, 2019 1 commit
  31. 24 August, 2019 1 commit
  32. 19 August, 2019 1 commit
  33. 15 August, 2019 1 commit
  34. 13 August, 2019 1 commit
  35. 09 August, 2019 1 commit