1. 03 Mar 2018, 3 commits
    • drm/i915/execlists: Split spinlock from its irq disabling side-effect · a3e38836
      Committed by Chris Wilson
      During reset/wedging, we have to clean up the requests on the timeline
      and flush the pending interrupt state. Currently, we are abusing the irq
      disabling of the timeline spinlock to protect the irq state in
      conjunction with the engine's timeline requests, but this is accidental
      and conflates the spinlock with the irq state, a baffling state of
      affairs for the reader.
      
      Instead, explicitly disable irqs over the critical section, and separate
      modifying the irq state from the timeline's requests (a sketch of the
      resulting pattern follows this entry).
      Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180302143246.2579-4-chris@chris-wilson.co.uk
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
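      A minimal sketch of the pattern described above, using made-up names
      (struct engine, clear_pending_irq_state() and cancel_timeline_requests()
      are illustrative stand-ins, not the real i915 symbols); only the
      locking/irq primitives are the actual kernel API:

          /* Sketch only -- not the actual i915 reset path. */
          #include <linux/spinlock.h>

          struct engine {
                  spinlock_t timeline_lock;       /* guards the timeline requests */
          };

          static void clear_pending_irq_state(struct engine *engine) { /* ... */ }
          static void cancel_timeline_requests(struct engine *engine) { /* ... */ }

          /* Before: the irq state is only "protected" by the side effect of
           * spin_lock_irqsave() disabling interrupts.
           */
          static void reset_requests_old(struct engine *engine)
          {
                  unsigned long flags;

                  spin_lock_irqsave(&engine->timeline_lock, flags);
                  clear_pending_irq_state(engine);   /* relies on irqs being off */
                  cancel_timeline_requests(engine);  /* what the lock really guards */
                  spin_unlock_irqrestore(&engine->timeline_lock, flags);
          }

          /* After: the irq-off window is explicit, and the spinlock covers only
           * the timeline requests it is meant to protect.
           */
          static void reset_requests_new(struct engine *engine)
          {
                  local_irq_disable();

                  clear_pending_irq_state(engine);

                  spin_lock(&engine->timeline_lock);
                  cancel_timeline_requests(engine);
                  spin_unlock(&engine->timeline_lock);

                  local_irq_enable();
          }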
    • drm/i915/execlists: Move irq state manipulation inside irq disabled region · aebbc2d7
      Committed by Chris Wilson
      Although this state (execlists->active and engine->irq_posted) is not
      itself protected by the engine->timeline spinlock, taking that lock does
      conveniently ensure that irqs are disabled. We can use this to protect
      our manipulation of the state and so ensure that the next IRQ to arrive
      sees consistent state and (hopefully) ignores the engine being reset
      (a sketch of the locking follows this entry).
      Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180302131246.22036-1-chris@chris-wilson.co.uk
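      A hedged sketch of the idea: the writes to the irq-visible execlists
      state are performed while interrupts are already disabled as a side
      effect of spin_lock_irqsave() on the engine timeline, so a CS interrupt
      arriving mid-reset cannot observe half-updated state. The struct,
      helpers and bit name below are illustrative stand-ins; only the locking
      and bit primitives are the real kernel API:

          /* Sketch only; field and helper names are illustrative. */
          #include <linux/bitops.h>
          #include <linux/spinlock.h>

          #define ENGINE_IRQ_EXECLIST 0   /* stand-in for the real bit index */

          struct engine {
                  spinlock_t timeline_lock;
                  unsigned long irq_posted;
          };

          static void clear_execlists_active(struct engine *engine) { /* ... */ }
          static void cancel_timeline_requests(struct engine *engine) { /* ... */ }

          static void reset_irq_state(struct engine *engine)
          {
                  unsigned long flags;

                  /* spin_lock_irqsave() disables irqs as a side effect; that is
                   * what protects the writes below from the CS interrupt handler.
                   */
                  spin_lock_irqsave(&engine->timeline_lock, flags);

                  clear_execlists_active(engine);
                  clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);

                  cancel_timeline_requests(engine);

                  spin_unlock_irqrestore(&engine->timeline_lock, flags);
          }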
    • drm/i915: Suspend submission tasklets around wedging · 963ddd63
      Committed by Chris Wilson
      After staring hard at sequences like
      
      [   28.199013]  systemd-1       2..s. 26062228us : execlists_submission_tasklet: rcs0 cs-irq head=0 [0?], tail=1 [1?]
      [   28.199095]  systemd-1       2..s. 26062229us : execlists_submission_tasklet: rcs0 csb[1]: status=0x00000018:0x00000000, active=0x1
      [   28.199177]  systemd-1       2..s. 26062230us : execlists_submission_tasklet: rcs0 out[0]: ctx=0.1, seqno=3, prio=-1024
      [   28.199258]  systemd-1       2..s. 26062231us : execlists_submission_tasklet: rcs0 completed ctx=0
      [   28.199340]  gem_eio-829     1..s1 26066853us : execlists_submission_tasklet: rcs0 in[0]:  ctx=1.1, seqno=1, prio=0
      [   28.199421]   <idle>-0       2..s. 26066863us : execlists_submission_tasklet: rcs0 cs-irq head=1 [1?], tail=2 [2?]
      [   28.199503]   <idle>-0       2..s. 26066865us : execlists_submission_tasklet: rcs0 csb[2]: status=0x00000001:0x00000000, active=0x1
      [   28.199585]  gem_eio-829     1..s1 26067077us : execlists_submission_tasklet: rcs0 in[1]:  ctx=3.1, seqno=2, prio=0
      [   28.199667]  gem_eio-829     1..s1 26067078us : execlists_submission_tasklet: rcs0 in[0]:  ctx=1.2, seqno=1, prio=0
      [   28.199749]   <idle>-0       2..s. 26067084us : execlists_submission_tasklet: rcs0 cs-irq head=2 [2?], tail=3 [3?]
      [   28.199830]   <idle>-0       2..s. 26067085us : execlists_submission_tasklet: rcs0 csb[3]: status=0x00008002:0x00000001, active=0x1
      [   28.199912]   <idle>-0       2..s. 26067086us : execlists_submission_tasklet: rcs0 out[0]: ctx=1.2, seqno=1, prio=0
      [   28.199994]  gem_eio-829     2..s. 28246084us : execlists_submission_tasklet: rcs0 cs-irq head=3 [3?], tail=4 [4?]
      [   28.200096]  gem_eio-829     2..s. 28246088us : execlists_submission_tasklet: rcs0 csb[4]: status=0x00000014:0x00000001, active=0x5
      [   28.200178]  gem_eio-829     2..s. 28246089us : execlists_submission_tasklet: rcs0 out[0]: ctx=0.0, seqno=0, prio=0
      [   28.200260]  gem_eio-829     2..s. 28246127us : execlists_submission_tasklet: execlists_submission_tasklet:886 GEM_BUG_ON(buf[2 * head + 1] != port->context_id)
      
      the conclusion is that the only place where the ports are reset to zero
      is engine->cancel_requests, called during i915_gem_set_wedged().
      
      The race is horrible as it results from calling set-wedged on active HW
      (the GPU reset failed), so we need to be careful as the HW state
      changes beneath us. Fortunately, these are the same scary conditions
      that affect a normal reset, so we can reuse the same machinery to
      disable state tracking while we clobber it (see the sketch after this
      entry).
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104945
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Fixes: af7a8ffa ("drm/i915: Use rcu instead of stop_machine in set_wedged")
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180302113324.23189-2-chris@chris-wilson.co.uk
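      A hedged sketch of the approach: keep the submission tasklet from
      running while set-wedged clobbers the port tracking, then release it
      again. tasklet_disable()/tasklet_enable() are the real kernel
      primitives; the engine structure and callback are stand-ins, not the
      i915 ones:

          /* Sketch only: stand-in types, not the real intel_engine_cs. */
          #include <linux/interrupt.h>

          struct fake_engine {
                  struct tasklet_struct submission_tasklet;
                  void (*cancel_requests)(struct fake_engine *engine);
          };

          static void wedge_engine(struct fake_engine *engine)
          {
                  /* Stop the tasklet from re-reading HW state (and refilling the
                   * ports) while the wedging code clobbers the tracking below.
                   */
                  tasklet_disable(&engine->submission_tasklet);

                  engine->cancel_requests(engine);   /* mark requests as wedged */

                  tasklet_enable(&engine->submission_tasklet);
          }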
  2. 24 Feb 2018, 1 commit
  3. 22 Feb 2018, 4 commits
  4. 17 Feb 2018, 1 commit
  5. 16 Feb 2018, 1 commit
  6. 08 Feb 2018, 2 commits
  7. 06 Feb 2018, 1 commit
  8. 05 Feb 2018, 2 commits
  9. 03 Feb 2018, 1 commit
  10. 01 Feb 2018, 1 commit
  11. 26 Jan 2018, 2 commits
  12. 24 Jan 2018, 3 commits
  13. 23 Jan 2018, 3 commits
  14. 11 Jan 2018, 1 commit
    • drm/i915: Don't adjust priority on an already signaled fence · 5005c851
      Committed by Chris Wilson
      When we retire a signaled fence, we free its dependency tree, but we
      skip clearing the list, so if we then try to adjust the priority of the
      signaled fence we may walk a list of freed dependencies (a sketch of
      the guard follows this entry).
      
      [ 3083.156757] ==================================================================
      [ 3083.156806] BUG: KASAN: use-after-free in execlists_schedule+0x199/0x660 [i915]
      [ 3083.156810] Read of size 8 at addr ffff8806bf20f400 by task Xorg/831
      
      [ 3083.156815] CPU: 0 PID: 831 Comm: Xorg Not tainted 4.15.0-rc6-no-psn+ #1
      [ 3083.156817] Hardware name: Notebook                         N24_25BU/N24_25BU, BIOS 5.12 02/17/2017
      [ 3083.156818] Call Trace:
      [ 3083.156823]  dump_stack+0x5c/0x7a
      [ 3083.156827]  print_address_description+0x6b/0x290
      [ 3083.156830]  kasan_report+0x28f/0x380
      [ 3083.156872]  ? execlists_schedule+0x199/0x660 [i915]
      [ 3083.156914]  execlists_schedule+0x199/0x660 [i915]
      [ 3083.156956]  ? intel_crtc_atomic_check+0x146/0x4e0 [i915]
      [ 3083.156997]  ? execlists_submit_request+0xe0/0xe0 [i915]
      [ 3083.157038]  ? i915_vma_misplaced.part.4+0x25/0xb0 [i915]
      [ 3083.157079]  ? __i915_vma_do_pin+0x7c8/0xc80 [i915]
      [ 3083.157121]  ? intel_atomic_state_alloc+0x44/0x60 [i915]
      [ 3083.157130]  ? drm_atomic_helper_page_flip+0x3e/0xb0 [drm_kms_helper]
      [ 3083.157145]  ? drm_mode_page_flip_ioctl+0x7d2/0x850 [drm]
      [ 3083.157159]  ? drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157172]  ? drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157211]  i915_gem_object_wait_priority+0x14c/0x2c0 [i915]
      [ 3083.157251]  ? i915_gem_get_aperture_ioctl+0x150/0x150 [i915]
      [ 3083.157290]  ? i915_vma_pin_fence+0x1d8/0x320 [i915]
      [ 3083.157331]  ? intel_pin_and_fence_fb_obj+0x175/0x250 [i915]
      [ 3083.157372]  ? intel_rotation_info_size+0x60/0x60 [i915]
      [ 3083.157413]  ? intel_link_compute_m_n+0x80/0x80 [i915]
      [ 3083.157428]  ? drm_dev_printk+0x1b0/0x1b0 [drm]
      [ 3083.157443]  ? drm_dev_printk+0x1b0/0x1b0 [drm]
      [ 3083.157485]  intel_prepare_plane_fb+0x2f8/0x5a0 [i915]
      [ 3083.157527]  ? intel_crtc_get_vblank_counter+0x80/0x80 [i915]
      [ 3083.157536]  drm_atomic_helper_prepare_planes+0xa0/0x1c0 [drm_kms_helper]
      [ 3083.157587]  intel_atomic_commit+0x12e/0x4e0 [i915]
      [ 3083.157605]  drm_atomic_helper_page_flip+0xa2/0xb0 [drm_kms_helper]
      [ 3083.157621]  drm_mode_page_flip_ioctl+0x7d2/0x850 [drm]
      [ 3083.157638]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157652]  ? drm_lease_owner+0x1a/0x30 [drm]
      [ 3083.157668]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157681]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157696]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157711]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157725]  ? drm_getstats+0x20/0x20 [drm]
      [ 3083.157729]  ? timerqueue_del+0x49/0x80
      [ 3083.157732]  ? __remove_hrtimer+0x62/0xb0
      [ 3083.157735]  ? hrtimer_try_to_cancel+0x173/0x210
      [ 3083.157738]  do_vfs_ioctl+0x13b/0x880
      [ 3083.157741]  ? ioctl_preallocate+0x140/0x140
      [ 3083.157744]  ? _raw_spin_unlock_irq+0xe/0x30
      [ 3083.157746]  ? do_setitimer+0x234/0x370
      [ 3083.157750]  ? SyS_setitimer+0x19e/0x1b0
      [ 3083.157752]  ? SyS_alarm+0x140/0x140
      [ 3083.157755]  ? __rcu_read_unlock+0x66/0x80
      [ 3083.157757]  ? __fget+0xc4/0x100
      [ 3083.157760]  SyS_ioctl+0x74/0x80
      [ 3083.157763]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      [ 3083.157765] RIP: 0033:0x7f6135d0c6a7
      [ 3083.157767] RSP: 002b:00007fff01451888 EFLAGS: 00003246 ORIG_RAX: 0000000000000010
      [ 3083.157769] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6135d0c6a7
      [ 3083.157771] RDX: 00007fff01451950 RSI: 00000000c01864b0 RDI: 000000000000000c
      [ 3083.157772] RBP: 00007f613076f600 R08: 0000000000000001 R09: 0000000000000000
      [ 3083.157773] R10: 0000000000000060 R11: 0000000000003246 R12: 0000000000000000
      [ 3083.157774] R13: 0000000000000060 R14: 000000000000001b R15: 0000000000000060
      
      [ 3083.157779] Allocated by task 831:
      [ 3083.157783]  kmem_cache_alloc+0xc0/0x200
      [ 3083.157822]  i915_gem_request_await_dma_fence+0x2c4/0x5d0 [i915]
      [ 3083.157861]  i915_gem_request_await_object+0x321/0x370 [i915]
      [ 3083.157900]  i915_gem_do_execbuffer+0x1165/0x19c0 [i915]
      [ 3083.157937]  i915_gem_execbuffer2+0x1ad/0x550 [i915]
      [ 3083.157950]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157962]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157964]  do_vfs_ioctl+0x13b/0x880
      [ 3083.157966]  SyS_ioctl+0x74/0x80
      [ 3083.157968]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      
      [ 3083.157971] Freed by task 831:
      [ 3083.157973]  kmem_cache_free+0x77/0x220
      [ 3083.158012]  i915_gem_request_retire+0x72c/0xa70 [i915]
      [ 3083.158051]  i915_gem_request_alloc+0x1e9/0x8b0 [i915]
      [ 3083.158089]  i915_gem_do_execbuffer+0xa96/0x19c0 [i915]
      [ 3083.158127]  i915_gem_execbuffer2+0x1ad/0x550 [i915]
      [ 3083.158140]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.158153]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.158155]  do_vfs_ioctl+0x13b/0x880
      [ 3083.158156]  SyS_ioctl+0x74/0x80
      [ 3083.158158]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      
      [ 3083.158162] The buggy address belongs to the object at ffff8806bf20f400
                      which belongs to the cache i915_dependency of size 64
      [ 3083.158166] The buggy address is located 0 bytes inside of
                      64-byte region [ffff8806bf20f400, ffff8806bf20f440)
      [ 3083.158168] The buggy address belongs to the page:
      [ 3083.158171] page:00000000d43decc4 count:1 mapcount:0 mapping:          (null) index:0x0
      [ 3083.158174] flags: 0x17ffe0000000100(slab)
      [ 3083.158179] raw: 017ffe0000000100 0000000000000000 0000000000000000 0000000180200020
      [ 3083.158182] raw: ffffea001afc16c0 0000000500000005 ffff880731b881c0 0000000000000000
      [ 3083.158184] page dumped because: kasan: bad access detected
      
      [ 3083.158187] Memory state around the buggy address:
      [ 3083.158190]  ffff8806bf20f300: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158192]  ffff8806bf20f380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158195] >ffff8806bf20f400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158196]                    ^
      [ 3083.158199]  ffff8806bf20f480: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158201]  ffff8806bf20f500: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158203] ==================================================================
      Reported-by: Alexandru Chirvasitu <achirvasub@gmail.com>
      Reported-by: Mike Keehan <mike@keehan.net>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104436
      Fixes: 1f181225 ("drm/i915/execlists: Keep request->priority for its lifetime")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Alexandru Chirvasitu <achirvasub@gmail.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Tested-by: Alexandru Chirvasitu <achirvasub@gmail.com>
      Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180106105618.13532-1-chris@chris-wilson.co.uk
      (cherry picked from commit c218ee03)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
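      A hedged sketch of the guard this fix describes, using stand-in types
      rather than the real i915 request structures: once the fence has
      signaled, retiring it frees the dependency list, so the priority walk
      must bail out early rather than touch freed memory:

          /* Sketch only: stand-in types, not the real i915 structures. */
          #include <linux/list.h>
          #include <linux/types.h>

          struct fake_request {
                  bool completed;              /* stand-in for "fence already signaled" */
                  struct list_head signalers;  /* dependency list, freed on retire */
                  int priority;
          };

          static void fake_schedule(struct fake_request *rq, int prio)
          {
                  if (rq->completed)
                          return;      /* dependencies already freed; do not walk them */

                  if (prio <= rq->priority)
                          return;

                  rq->priority = prio;
                  /* ... would walk rq->signalers and propagate the new priority ... */
          }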
  15. 08 Jan 2018, 1 commit
    • drm/i915: Don't adjust priority on an already signaled fence · c218ee03
      Committed by Chris Wilson
      When we retire a signaled fence, we free its dependency tree, but we
      skip clearing the list, so if we then try to adjust the priority of the
      signaled fence we may walk a list of freed dependencies.
      
      [ 3083.156757] ==================================================================
      [ 3083.156806] BUG: KASAN: use-after-free in execlists_schedule+0x199/0x660 [i915]
      [ 3083.156810] Read of size 8 at addr ffff8806bf20f400 by task Xorg/831
      
      [ 3083.156815] CPU: 0 PID: 831 Comm: Xorg Not tainted 4.15.0-rc6-no-psn+ #1
      [ 3083.156817] Hardware name: Notebook                         N24_25BU/N24_25BU, BIOS 5.12 02/17/2017
      [ 3083.156818] Call Trace:
      [ 3083.156823]  dump_stack+0x5c/0x7a
      [ 3083.156827]  print_address_description+0x6b/0x290
      [ 3083.156830]  kasan_report+0x28f/0x380
      [ 3083.156872]  ? execlists_schedule+0x199/0x660 [i915]
      [ 3083.156914]  execlists_schedule+0x199/0x660 [i915]
      [ 3083.156956]  ? intel_crtc_atomic_check+0x146/0x4e0 [i915]
      [ 3083.156997]  ? execlists_submit_request+0xe0/0xe0 [i915]
      [ 3083.157038]  ? i915_vma_misplaced.part.4+0x25/0xb0 [i915]
      [ 3083.157079]  ? __i915_vma_do_pin+0x7c8/0xc80 [i915]
      [ 3083.157121]  ? intel_atomic_state_alloc+0x44/0x60 [i915]
      [ 3083.157130]  ? drm_atomic_helper_page_flip+0x3e/0xb0 [drm_kms_helper]
      [ 3083.157145]  ? drm_mode_page_flip_ioctl+0x7d2/0x850 [drm]
      [ 3083.157159]  ? drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157172]  ? drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157211]  i915_gem_object_wait_priority+0x14c/0x2c0 [i915]
      [ 3083.157251]  ? i915_gem_get_aperture_ioctl+0x150/0x150 [i915]
      [ 3083.157290]  ? i915_vma_pin_fence+0x1d8/0x320 [i915]
      [ 3083.157331]  ? intel_pin_and_fence_fb_obj+0x175/0x250 [i915]
      [ 3083.157372]  ? intel_rotation_info_size+0x60/0x60 [i915]
      [ 3083.157413]  ? intel_link_compute_m_n+0x80/0x80 [i915]
      [ 3083.157428]  ? drm_dev_printk+0x1b0/0x1b0 [drm]
      [ 3083.157443]  ? drm_dev_printk+0x1b0/0x1b0 [drm]
      [ 3083.157485]  intel_prepare_plane_fb+0x2f8/0x5a0 [i915]
      [ 3083.157527]  ? intel_crtc_get_vblank_counter+0x80/0x80 [i915]
      [ 3083.157536]  drm_atomic_helper_prepare_planes+0xa0/0x1c0 [drm_kms_helper]
      [ 3083.157587]  intel_atomic_commit+0x12e/0x4e0 [i915]
      [ 3083.157605]  drm_atomic_helper_page_flip+0xa2/0xb0 [drm_kms_helper]
      [ 3083.157621]  drm_mode_page_flip_ioctl+0x7d2/0x850 [drm]
      [ 3083.157638]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157652]  ? drm_lease_owner+0x1a/0x30 [drm]
      [ 3083.157668]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157681]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157696]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157711]  ? drm_mode_cursor2_ioctl+0x10/0x10 [drm]
      [ 3083.157725]  ? drm_getstats+0x20/0x20 [drm]
      [ 3083.157729]  ? timerqueue_del+0x49/0x80
      [ 3083.157732]  ? __remove_hrtimer+0x62/0xb0
      [ 3083.157735]  ? hrtimer_try_to_cancel+0x173/0x210
      [ 3083.157738]  do_vfs_ioctl+0x13b/0x880
      [ 3083.157741]  ? ioctl_preallocate+0x140/0x140
      [ 3083.157744]  ? _raw_spin_unlock_irq+0xe/0x30
      [ 3083.157746]  ? do_setitimer+0x234/0x370
      [ 3083.157750]  ? SyS_setitimer+0x19e/0x1b0
      [ 3083.157752]  ? SyS_alarm+0x140/0x140
      [ 3083.157755]  ? __rcu_read_unlock+0x66/0x80
      [ 3083.157757]  ? __fget+0xc4/0x100
      [ 3083.157760]  SyS_ioctl+0x74/0x80
      [ 3083.157763]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      [ 3083.157765] RIP: 0033:0x7f6135d0c6a7
      [ 3083.157767] RSP: 002b:00007fff01451888 EFLAGS: 00003246 ORIG_RAX: 0000000000000010
      [ 3083.157769] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f6135d0c6a7
      [ 3083.157771] RDX: 00007fff01451950 RSI: 00000000c01864b0 RDI: 000000000000000c
      [ 3083.157772] RBP: 00007f613076f600 R08: 0000000000000001 R09: 0000000000000000
      [ 3083.157773] R10: 0000000000000060 R11: 0000000000003246 R12: 0000000000000000
      [ 3083.157774] R13: 0000000000000060 R14: 000000000000001b R15: 0000000000000060
      
      [ 3083.157779] Allocated by task 831:
      [ 3083.157783]  kmem_cache_alloc+0xc0/0x200
      [ 3083.157822]  i915_gem_request_await_dma_fence+0x2c4/0x5d0 [i915]
      [ 3083.157861]  i915_gem_request_await_object+0x321/0x370 [i915]
      [ 3083.157900]  i915_gem_do_execbuffer+0x1165/0x19c0 [i915]
      [ 3083.157937]  i915_gem_execbuffer2+0x1ad/0x550 [i915]
      [ 3083.157950]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.157962]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.157964]  do_vfs_ioctl+0x13b/0x880
      [ 3083.157966]  SyS_ioctl+0x74/0x80
      [ 3083.157968]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      
      [ 3083.157971] Freed by task 831:
      [ 3083.157973]  kmem_cache_free+0x77/0x220
      [ 3083.158012]  i915_gem_request_retire+0x72c/0xa70 [i915]
      [ 3083.158051]  i915_gem_request_alloc+0x1e9/0x8b0 [i915]
      [ 3083.158089]  i915_gem_do_execbuffer+0xa96/0x19c0 [i915]
      [ 3083.158127]  i915_gem_execbuffer2+0x1ad/0x550 [i915]
      [ 3083.158140]  drm_ioctl_kernel+0xa7/0xf0 [drm]
      [ 3083.158153]  drm_ioctl+0x45b/0x560 [drm]
      [ 3083.158155]  do_vfs_ioctl+0x13b/0x880
      [ 3083.158156]  SyS_ioctl+0x74/0x80
      [ 3083.158158]  entry_SYSCALL_64_fastpath+0x1a/0x7d
      
      [ 3083.158162] The buggy address belongs to the object at ffff8806bf20f400
                      which belongs to the cache i915_dependency of size 64
      [ 3083.158166] The buggy address is located 0 bytes inside of
                      64-byte region [ffff8806bf20f400, ffff8806bf20f440)
      [ 3083.158168] The buggy address belongs to the page:
      [ 3083.158171] page:00000000d43decc4 count:1 mapcount:0 mapping:          (null) index:0x0
      [ 3083.158174] flags: 0x17ffe0000000100(slab)
      [ 3083.158179] raw: 017ffe0000000100 0000000000000000 0000000000000000 0000000180200020
      [ 3083.158182] raw: ffffea001afc16c0 0000000500000005 ffff880731b881c0 0000000000000000
      [ 3083.158184] page dumped because: kasan: bad access detected
      
      [ 3083.158187] Memory state around the buggy address:
      [ 3083.158190]  ffff8806bf20f300: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158192]  ffff8806bf20f380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158195] >ffff8806bf20f400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158196]                    ^
      [ 3083.158199]  ffff8806bf20f480: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158201]  ffff8806bf20f500: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
      [ 3083.158203] ==================================================================
      Reported-by: Alexandru Chirvasitu <achirvasub@gmail.com>
      Reported-by: Mike Keehan <mike@keehan.net>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104436
      Fixes: 1f181225 ("drm/i915/execlists: Keep request->priority for its lifetime")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Alexandru Chirvasitu <achirvasub@gmail.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Tested-by: Alexandru Chirvasitu <achirvasub@gmail.com>
      Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20180106105618.13532-1-chris@chris-wilson.co.uk
  16. 03 Jan 2018, 6 commits
  17. 23 Dec 2017, 1 commit
    • drm/i915/execlists: Show preemption progress in GEM_TRACE · 193a98dc
      Committed by Chris Wilson
      We already emit a GEM_TRACE when we start preemption, but we lack one to
      show when the preemption has completed and we return to the regular
      queue (a sketch of the added trace point follows this entry). This is to
      continue the investigation into the mysterious
      
      <0>[  197.854177]   <idle>-0       1..s1 197837017us : execlists_submission_tasklet: rcs0 cs-irq head=0 [0], tail=0 [0]
      <0>[  197.854209] drv_self-6008    2.... 197837390us : reset_common_ring: rcs0 seqno=15515
      <0>[  197.854240] drv_self-6008    2.... 197837415us : reset_common_ring: bcs0 seqno=0
      <0>[  197.854270] drv_self-6008    2.... 197837443us : reset_common_ring: vcs0 seqno=0
      <0>[  197.854300] drv_self-6008    2.... 197837463us : reset_common_ring: vcs1 seqno=0
      <0>[  197.854330] drv_self-6008    2.... 197837482us : reset_common_ring: vecs0 seqno=0
      <0>[  197.854360] ksoftirq-23      2..s. 197838341us : execlists_submission_tasklet: bcs0 in[0]:  ctx=0.1, seqno=1dce7
      <0>[  197.854392]   <idle>-0       1..s1 197838347us : execlists_submission_tasklet: bcs0 cs-irq head=0 [0], tail=0 [0]
      <0>[  197.854423] ksoftirq-23      2..s. 197838354us : execlists_submission_tasklet: vcs0 in[0]:  ctx=0.1, seqno=1d027
      <0>[  197.854456] ksoftirq-23      2.Ns. 197838361us : execlists_submission_tasklet: vcs1 in[0]:  ctx=0.1, seqno=1e738
      <0>[  197.854488] ksoftirq-23      2.Ns. 197838366us : execlists_submission_tasklet: vecs0 in[0]:  ctx=0.1, seqno=235aa
      <0>[  197.854520] ksoftirq-23      2.Ns. 197838376us : execlists_submission_tasklet: rcs0 in[0]:  ctx=0.1, seqno=15518
      <0>[  197.854552]   <idle>-0       1..s1 197853285us : execlists_submission_tasklet: rcs0 cs-irq head=0 [0], tail=7 [7]
      <0>[  197.854584]   <idle>-0       1..s1 197853285us : execlists_submission_tasklet: rcs0 csb[1]: status=0x00000018:0x00000000
      <0>[  197.854616]   <idle>-0       1..s1 197853286us : execlists_submission_tasklet: rcs0 out[0]: ctx=0.0, seqno=0
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20171222132742.4272-1-chris@chris-wilson.co.uk
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
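      A hedged sketch of the kind of trace point being added: when the CSB
      entry reporting the preempt-to-idle completion is processed, log it just
      as the start of preemption is already logged. GEM_TRACE() is the real
      i915 macro, but the surrounding condition and helpers are paraphrased
      from memory rather than quoted:

          /* Paraphrased fragment of the CSB-handling branch in
           * execlists_submission_tasklet(); constants and helpers approximate.
           */
          if (status & GEN8_CTX_STATUS_COMPLETE &&
              buf[2 * head + 1] == PREEMPT_ID) {
                  GEM_TRACE("%s preemption completed\n", engine->name);

                  execlists_cancel_port_requests(execlists);
                  /* ... unwind incomplete requests and fall back to the queue ... */
          }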
  18. 20 Dec 2017, 2 commits
  19. 08 Dec 2017, 1 commit
  20. 29 Nov 2017, 1 commit
  21. 22 Nov 2017, 2 commits