1. 06 December 2019, 6 commits
    • drm/i915/gt: Acquire a GT wakeref for the breadcrumb interrupt · 045d1fb7
      Committed by Chris Wilson
      Take a wakeref on the intel_gt specifically for the enabled breadcrumb
      interrupt so that we can safely process the mmio. If the intel_gt is
      already asleep by the time we try to set up the breadcrumb interrupt,
      then by a process of elimination we know the request must have been
      completed and we can skip its enablement!
      
      <4> [1518.350005] Unclaimed write to register 0x220a8
      <4> [1518.350323] WARNING: CPU: 2 PID: 3685 at drivers/gpu/drm/i915/intel_uncore.c:1163 __unclaimed_reg_debug+0x40/0x50 [i915]
      <4> [1518.350393] Modules linked in: vgem snd_hda_codec_hdmi x86_pkg_temp_thermal i915 coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hwdep snd_hda_core btusb cdc_ether btrtl usbnet btbcm btintel r8152 snd_pcm mii bluetooth ecdh_generic ecc i2c_hid pinctrl_sunrisepoint pinctrl_intel intel_lpss_pci prime_numbers [last unloaded: vgem]
      <4> [1518.350646] CPU: 2 PID: 3685 Comm: gem_exec_parse_ Tainted: G     U            5.4.0-rc8-CI-CI_DRM_7490+ #1
      <4> [1518.350708] Hardware name: Google Caroline/Caroline, BIOS MrChromebox 08/27/2018
      <4> [1518.350946] RIP: 0010:__unclaimed_reg_debug+0x40/0x50 [i915]
      <4> [1518.350992] Code: 74 05 5b 5d 41 5c c3 45 84 e4 48 c7 c0 95 8d 47 a0 48 c7 c6 8b 8d 47 a0 48 0f 44 f0 89 ea 48 c7 c7 9e 8d 47 a0 e8 40 45 e3 e0 <0f> 0b 83 2d 27 4f 2a 00 01 5b 5d 41 5c c3 66 90 41 55 41 54 55 53
      <4> [1518.351100] RSP: 0018:ffffc900007f39c8 EFLAGS: 00010086
      <4> [1518.351140] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006
      <4> [1518.351202] RDX: 0000000080000006 RSI: 0000000000000000 RDI: 00000000ffffffff
      <4> [1518.351249] RBP: 00000000000220a8 R08: 0000000000000000 R09: 0000000000000000
      <4> [1518.351296] R10: ffffc900007f3990 R11: ffffc900007f3868 R12: 0000000000000000
      <4> [1518.351342] R13: 00000000fefeffff R14: 0000000000000092 R15: ffff888155fea000
      <4> [1518.351391] FS:  00007fc255abfe40(0000) GS:ffff88817ab00000(0000) knlGS:0000000000000000
      <4> [1518.351445] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4> [1518.351485] CR2: 00007fc2554882d0 CR3: 0000000168ca2005 CR4: 00000000003606e0
      <4> [1518.351529] Call Trace:
      <4> [1518.351746]  fwtable_write32+0x114/0x1d0 [i915]
      <4> [1518.351795]  ? sync_file_alloc+0x80/0x80
      <4> [1518.352039]  gen8_logical_ring_enable_irq+0x30/0x50 [i915]
      <4> [1518.352295]  irq_enable.part.10+0x23/0x40 [i915]
      <4> [1518.352523]  i915_request_enable_breadcrumb+0xb5/0x330 [i915]
      <4> [1518.352575]  ? sync_file_alloc+0x80/0x80
      <4> [1518.352612]  __dma_fence_enable_signaling+0x60/0x160
      <4> [1518.352653]  ? sync_file_alloc+0x80/0x80
      <4> [1518.352685]  dma_fence_add_callback+0x44/0xd0
      <4> [1518.352726]  sync_file_poll+0x95/0xc0
      <4> [1518.352767]  do_sys_poll+0x24d/0x570
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205215842.862750-1-chris@chris-wilson.co.uk
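      The pattern the commit describes can be sketched in userspace. This is a hypothetical analogue, not the real i915 API: `gt_get_if_awake()` stands in for a conditional wakeref acquire, and all names are illustrative. The key idea is that the mmio write is only reached while holding a wakeref, and a GT that is already asleep implies the request has completed, so the enable can be skipped.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      struct fake_gt {
              int wakeref_count;      /* >0 means the GT is awake */
      };

      /* Analogue of a "get wakeref only if already awake" helper:
       * take a reference only if the GT has not gone to sleep. */
      static bool gt_get_if_awake(struct fake_gt *gt)
      {
              if (gt->wakeref_count == 0)
                      return false;
              gt->wakeref_count++;
              return true;
      }

      /* Returns true if the interrupt was enabled; false if the GT was
       * asleep, in which case the mmio write is safely skipped. */
      static bool enable_breadcrumb_irq(struct fake_gt *gt)
      {
              if (!gt_get_if_awake(gt))
                      return false;   /* asleep => request already completed */
              /* ... only here is it safe to write the interrupt-mask
               * register, since we now hold a wakeref ... */
              return true;
      }

      int main(void)
      {
              struct fake_gt awake = { .wakeref_count = 1 };
              struct fake_gt asleep = { .wakeref_count = 0 };

              printf("%d %d\n", enable_breadcrumb_irq(&awake),
                     enable_breadcrumb_irq(&asleep));
              return 0;
      }
      ```

      The wakeref taken on the enable path is deliberately not released here; in the real driver it is held for the lifetime of the enabled breadcrumb interrupt.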
    • drm/i915: Claim vma while under closed_lock in i915_vma_parked() · 77853186
      Committed by Chris Wilson
      Remove the vma we wish to destroy from the gt->closed_list to avoid
      having two concurrent i915_vma_parked() callers try to free it.
      
      Fixes: aa5e4453 ("drm/i915/gem: Try to flush pending unbind events")
      References: 2850748e ("drm/i915: Pull i915_vma_pin under the vm->mutex")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205214159.829727-1-chris@chris-wilson.co.uk
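      The claim-under-lock idea can be sketched with a userspace analogue using a pthread mutex in place of the closed_lock spinlock. The types and the `claim_closed_vma()` helper are hypothetical: the point is that the destroyer unlinks the vma from the closed list while still holding the lock, so a second parker can never see, and thus never double-free, the same vma.

      ```c
      #include <pthread.h>
      #include <stddef.h>
      #include <stdio.h>

      struct fake_vma {
              struct fake_vma *next;
      };

      static pthread_mutex_t closed_lock = PTHREAD_MUTEX_INITIALIZER;
      static struct fake_vma *closed_list;

      /* Claim the first closed vma: unlink it under the lock, after which
       * the caller owns it exclusively and may free it without racing a
       * concurrent parker. */
      static struct fake_vma *claim_closed_vma(void)
      {
              struct fake_vma *vma;

              pthread_mutex_lock(&closed_lock);
              vma = closed_list;
              if (vma)
                      closed_list = vma->next;        /* the claim */
              pthread_mutex_unlock(&closed_lock);

              return vma;
      }

      int main(void)
      {
              struct fake_vma vma = { .next = NULL };

              closed_list = &vma;

              /* the first parker claims the vma; the second finds the
               * list empty and cannot free it a second time */
              printf("%d %d\n", claim_closed_vma() == &vma,
                     claim_closed_vma() == NULL);
              return 0;
      }
      ```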
    • drm/i915/gt: Trim gen6 ppgtt updates to PD cachelines · d315fe8b
      Committed by Chris Wilson
      Now that we have the ring TLB invalidation in place, we need only
      update the page directory cachelines that we have altered. This is a
      great reduction from rewriting the whole 2MiB ppgtt on every update.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205234059.1010030-1-chris@chris-wilson.co.uk
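      The trimming arithmetic can be sketched as follows. This is a hypothetical illustration, assuming the gen6 layout of 4-byte page-directory entries and 64-byte cachelines; the helper name is made up. Given the first and last altered PDE, it computes the cacheline-aligned span to rewrite instead of flushing the whole directory.

      ```c
      #include <stdio.h>

      #define CACHELINE_BYTES 64u
      #define GEN6_PDE_SIZE   4u      /* bytes per page-directory entry */

      /* Compute the cacheline-aligned byte span of the page directory
       * covering only the PDEs in [first_pde, last_pde]. */
      static void pd_dirty_span(unsigned int first_pde, unsigned int last_pde,
                                unsigned int *offset, unsigned int *length)
      {
              unsigned int start = first_pde * GEN6_PDE_SIZE;
              unsigned int end = (last_pde + 1) * GEN6_PDE_SIZE;

              /* round the span out to whole cachelines */
              start &= ~(CACHELINE_BYTES - 1);
              end = (end + CACHELINE_BYTES - 1) & ~(CACHELINE_BYTES - 1);

              *offset = start;
              *length = end - start;
      }

      int main(void)
      {
              unsigned int offset, length;

              /* a single updated PDE costs one 64B write,
               * not a rewrite of the whole page directory */
              pd_dirty_span(10, 10, &offset, &length);
              printf("%u %u\n", offset, length);
              return 0;
      }
      ```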
    • drm/i915: Serialise i915_active_acquire() with __active_retire() · bbca083d
      Committed by Chris Wilson
      As __active_retire() does its final atomic_dec() under the
      ref->tree_lock spinlock, we need to serialise with the retirement
      during i915_active_acquire() in order to prevent ourselves from
      reusing ref->cache and ref->tree while they are being destroyed.
      
      [  +0.000005] kernel BUG at drivers/gpu/drm/i915/i915_active.c:157!
      [  +0.000011] invalid opcode: 0000 [#1] SMP
      [  +0.000004] CPU: 7 PID: 188 Comm: kworker/u16:4 Not tainted 5.4.0-rc8-03070-gac5e57322614 #89
      [  +0.000002] Hardware name: Razer Razer Blade Stealth 13 Late 2019/LY320, BIOS 1.02 09/10/2019
      [  +0.000082] Workqueue: events_unbound active_work [i915]
      [  +0.000059] RIP: 0010:__active_retire+0x115/0x120 [i915]
      [  +0.000003] Code: 75 28 48 8b 3d 8c 6e 1a 00 48 89 ee e8 e4 5f a5 c0 48 8b 44 24 10 65 48 33 04 25 28 00 00 00 75 0f 48 83 c4 18 5b 5d 41 5c c3 <0f> 0b 0f 0b 0f 0b e8 a0 90 87 c0 0f 1f 44 00 00 48 8b 3d 54 6e 1a
      [  +0.000002] RSP: 0018:ffffb833003f7e48 EFLAGS: 00010286
      [  +0.000003] RAX: ffff8d6e8d726d00 RBX: ffff8d6f9db4e840 RCX: 0000000000000000
      [  +0.000001] RDX: ffffffff82605930 RSI: ffff8d6f9adc4908 RDI: ffff8d6e96cefe28
      [  +0.000002] RBP: ffff8d6e96cefe00 R08: 0000000000000000 R09: ffff8d6f9ffe9a50
      [  +0.000002] R10: 0000000000000048 R11: 0000000000000018 R12: ffff8d6f9adc4930
      [  +0.000001] R13: ffff8d6f9e04fb00 R14: 0000000000000000 R15: ffff8d6f9adc4988
      [  +0.000002] FS:  0000000000000000(0000) GS:ffff8d6f9ffc0000(0000) knlGS:0000000000000000
      [  +0.000002] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  +0.000002] CR2: 000055eb5a34cf10 CR3: 000000018d609002 CR4: 0000000000760ee0
      [  +0.000002] PKRU: 55555554
      [  +0.000001] Call Trace:
      [  +0.000010]  process_one_work+0x1aa/0x350
      [  +0.000004]  worker_thread+0x4d/0x3a0
      [  +0.000004]  kthread+0xfb/0x130
      [  +0.000004]  ? process_one_work+0x350/0x350
      [  +0.000003]  ? kthread_park+0x90/0x90
      [  +0.000005]  ret_from_fork+0x1f/0x40
      Reported-by: Kenneth Graunke <kenneth@whitecape.org>
      Fixes: c9ad602f ("drm/i915: Split i915_active.mutex into an irq-safe spinlock for the rbtree")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Cc: Matthew Auld <matthew.auld@intel.com>
      Tested-by: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205183332.801237-1-chris@chris-wilson.co.uk
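      The serialisation scheme can be sketched with a userspace analogue, using C11 atomics for the reference count and a pthread mutex in place of tree_lock. The struct and helper names are hypothetical. The fast path bumps the count only if it is non-zero (an analogue of the kernel's atomic_add_unless); when the count has hit zero, the acquirer must take the lock, which serialises against a retirer performing its final decrement, and hence the teardown of the cache and tree, under that same lock.

      ```c
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct fake_active {
              atomic_int count;
              pthread_mutex_t tree_lock;
      };

      /* Analogue of atomic_add_unless(&v, 1, 0): increment only if the
       * count is not already zero. */
      static bool get_unless_zero(atomic_int *v)
      {
              int old = atomic_load(v);

              while (old != 0) {
                      if (atomic_compare_exchange_weak(v, &old, old + 1))
                              return true;
              }
              return false;
      }

      static void active_acquire(struct fake_active *ref)
      {
              if (get_unless_zero(&ref->count))
                      return;         /* fast path: already active */

              /* slow path: the retirer drops its final reference under
               * tree_lock, so taking the lock here serialises with the
               * destruction of the cache/tree before we reinitialise */
              pthread_mutex_lock(&ref->tree_lock);
              atomic_fetch_add(&ref->count, 1);
              pthread_mutex_unlock(&ref->tree_lock);
      }

      int main(void)
      {
              struct fake_active ref = {
                      .count = 0,
                      .tree_lock = PTHREAD_MUTEX_INITIALIZER,
              };

              active_acquire(&ref);   /* slow path: 0 -> 1 under the lock */
              active_acquire(&ref);   /* fast path: 1 -> 2 */
              printf("%d\n", atomic_load(&ref.count));
              return 0;
      }
      ```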
    • drm/i915/gt: Replace I915_READ with intel_uncore_read · 92c964ca
      Committed by Andi Shyti
      Get rid of the last remaining I915_READ in gt/ and make gt-land
      the first I915_READ-free happy island.
      Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Andi Shyti <andi.shyti@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205164422.727968-1-chris@chris-wilson.co.uk
    • drm/i915/gt: Save irqstate around virtual_context_destroy · 6f7ac828
      Committed by Chris Wilson
      As virtual_context_destroy() may be called from a request signal, it may
      be called from inside an irq-off section, and so we need to do a full
      save/restore of the irq state rather than blindly re-enabling irqs upon
      unlocking.
      
      <4> [110.024262] WARNING: inconsistent lock state
      <4> [110.024277] 5.4.0-rc8-CI-CI_DRM_7489+ #1 Tainted: G     U
      <4> [110.024292] --------------------------------
      <4> [110.024305] inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
      <4> [110.024323] kworker/0:0/5 [HC0[0]:SC0[0]:HE1:SE1] takes:
      <4> [110.024338] ffff88826a0c7a18 (&(&rq->lock)->rlock){?.-.}, at: i915_request_retire+0x221/0x930 [i915]
      <4> [110.024592] {IN-HARDIRQ-W} state was registered at:
      <4> [110.024612]   lock_acquire+0xa7/0x1c0
      <4> [110.024627]   _raw_spin_lock_irqsave+0x33/0x50
      <4> [110.024788]   intel_engine_breadcrumbs_irq+0x38c/0x600 [i915]
      <4> [110.024808]   irq_work_run_list+0x49/0x70
      <4> [110.024824]   irq_work_run+0x26/0x50
      <4> [110.024839]   smp_irq_work_interrupt+0x44/0x1e0
      <4> [110.024855]   irq_work_interrupt+0xf/0x20
      <4> [110.024871]   __do_softirq+0xb7/0x47f
      <4> [110.024885]   irq_exit+0xba/0xc0
      <4> [110.024898]   do_IRQ+0x83/0x160
      <4> [110.024910]   ret_from_intr+0x0/0x1d
      <4> [110.024922] irq event stamp: 172864
      <4> [110.024938] hardirqs last  enabled at (172863): [<ffffffff819ea214>] _raw_spin_unlock_irq+0x24/0x50
      <4> [110.024963] hardirqs last disabled at (172864): [<ffffffff819e9fba>] _raw_spin_lock_irq+0xa/0x40
      <4> [110.024988] softirqs last  enabled at (172812): [<ffffffff81c00385>] __do_softirq+0x385/0x47f
      <4> [110.025012] softirqs last disabled at (172797): [<ffffffff810b829a>] irq_exit+0xba/0xc0
      <4> [110.025031]
      other info that might help us debug this:
      <4> [110.025049]  Possible unsafe locking scenario:
      
      <4> [110.025065]        CPU0
      <4> [110.025075]        ----
      <4> [110.025084]   lock(&(&rq->lock)->rlock);
      <4> [110.025099]   <Interrupt>
      <4> [110.025109]     lock(&(&rq->lock)->rlock);
      <4> [110.025124]
       *** DEADLOCK ***
      
      <4> [110.025144] 4 locks held by kworker/0:0/5:
      <4> [110.025156]  #0: ffff88827588f528 ((wq_completion)events){+.+.}, at: process_one_work+0x1de/0x620
      <4> [110.025187]  #1: ffffc9000006fe78 ((work_completion)(&engine->retire_work)){+.+.}, at: process_one_work+0x1de/0x620
      <4> [110.025219]  #2: ffff88825605e270 (&kernel#2){+.+.}, at: engine_retire+0x57/0xe0 [i915]
      <4> [110.025405]  #3: ffff88826a0c7a18 (&(&rq->lock)->rlock){?.-.}, at: i915_request_retire+0x221/0x930 [i915]
      <4> [110.025634]
      stack backtrace:
      <4> [110.025653] CPU: 0 PID: 5 Comm: kworker/0:0 Tainted: G     U            5.4.0-rc8-CI-CI_DRM_7489+ #1
      <4> [110.025675] Hardware name:  /NUC7i5BNB, BIOS BNKBL357.86A.0054.2017.1025.1822 10/25/2017
      <4> [110.025856] Workqueue: events engine_retire [i915]
      <4> [110.025872] Call Trace:
      <4> [110.025891]  dump_stack+0x71/0x9b
      <4> [110.025907]  mark_lock+0x49a/0x500
      <4> [110.025926]  ? print_shortest_lock_dependencies+0x200/0x200
      <4> [110.025946]  mark_held_locks+0x49/0x70
      <4> [110.025962]  ? _raw_spin_unlock_irq+0x24/0x50
      <4> [110.025978]  lockdep_hardirqs_on+0xa2/0x1c0
      <4> [110.025995]  _raw_spin_unlock_irq+0x24/0x50
      <4> [110.026171]  virtual_context_destroy+0xc5/0x2e0 [i915]
      <4> [110.026376]  __active_retire+0xb4/0x290 [i915]
      <4> [110.026396]  dma_fence_signal_locked+0x9e/0x1b0
      <4> [110.026613]  i915_request_retire+0x451/0x930 [i915]
      <4> [110.026766]  retire_requests+0x4d/0x60 [i915]
      <4> [110.026919]  engine_retire+0x63/0xe0 [i915]
      
      Fixes: b1e3177b ("drm/i915: Coordinate i915_active with its own mutex")
      Fixes: 6d06779e ("drm/i915: Load balancing across a virtual engine")
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191205145934.663183-1-chris@chris-wilson.co.uk
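      Why spin_lock_irqsave()/spin_unlock_irqrestore() matters here can be shown with a userspace analogue, where a plain flag stands in for the CPU's irq-enable state. The fake_* helpers are purely illustrative. The irqsave variant remembers whether the caller already had irqs disabled and restores exactly that state, whereas the plain unlock_irq variant blindly re-enables irqs, which is the inconsistency lockdep flagged above.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      static bool irqs_enabled = true;        /* fake CPU irq-enable state */

      static unsigned long fake_spin_lock_irqsave(void)
      {
              unsigned long flags = irqs_enabled;
              irqs_enabled = false;           /* irqs off while locked */
              return flags;
      }

      static void fake_spin_unlock_irqrestore(unsigned long flags)
      {
              irqs_enabled = flags;           /* restore the caller's state */
      }

      static void fake_spin_unlock_irq(void)
      {
              irqs_enabled = true;            /* wrong if caller had irqs off */
      }

      int main(void)
      {
              unsigned long flags;

              irqs_enabled = false;   /* caller is inside an irq-off section */

              flags = fake_spin_lock_irqsave();
              fake_spin_unlock_irqrestore(flags);
              printf("irqsave restores: %d\n", irqs_enabled);  /* stays off */

              fake_spin_lock_irqsave();
              fake_spin_unlock_irq();
              printf("unlock_irq forces: %d\n", irqs_enabled); /* wrongly on */
              return 0;
      }
      ```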
  2. 05 December 2019, 8 commits
  3. 04 December 2019, 16 commits
  4. 03 December 2019, 10 commits