1. Feb 28, 2019: 1 commit
    • drm/amd/display: Fix reference counting for struct dc_sink. · dcd5fb82
      Committed by Mathias Fröhlich
      Reference counting in amdgpu_dm_connector for amdgpu_dm_connector::dc_sink
      and amdgpu_dm_connector::dc_em_sink, as well as in dc_link::local_sink,
      is currently inconsistent. Thus make reference counting consistent for
      these members and just plain increment the reference count when the
      variable gets assigned and decrement it when the pointer is set to NULL
      or replaced. Also simplify reference counting in selected function scopes
      to be sure the reference is released in any case. In some cases add a
      NULL pointer check before dereferencing.
      In a handful of places a comment is added to state that the reference
      increment already happened somewhere else.
      
      This actually fixes the following kernel bug on my system when enabling
      display core in amdgpu. There are some more similar bug reports around,
       so it probably helps in other places as well.
      
         kernel BUG at mm/slub.c:294!
         invalid opcode: 0000 [#1] SMP PTI
         CPU: 9 PID: 1180 Comm: Xorg Not tainted 5.0.0-rc1+ #2
         Hardware name: Supermicro X10DAi/X10DAI, BIOS 3.0a 02/05/2018
         RIP: 0010:__slab_free+0x1e2/0x3d0
         Code: 8b 54 24 30 48 89 4c 24 28 e8 da fb ff ff 4c 8b 54 24 28 85 c0 0f 85 67 fe ff ff 48 8d 65 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 <0f> 0b 49 3b 5c 24 28 75 ab 48 8b 44 24 30 49 89 4c 24 28 49 89 44
         RSP: 0018:ffffb0978589fa90 EFLAGS: 00010246
         RAX: ffff92f12806c400 RBX: 0000000080200019 RCX: ffff92f12806c400
         RDX: ffff92f12806c400 RSI: ffffdd6421a01a00 RDI: ffff92ed2f406e80
         RBP: ffffb0978589fb40 R08: 0000000000000001 R09: ffffffffc0ee4748
         R10: ffff92f12806c400 R11: 0000000000000001 R12: ffffdd6421a01a00
         R13: ffff92f12806c400 R14: ffff92ed2f406e80 R15: ffffdd6421a01a20
         FS:  00007f4170be0ac0(0000) GS:ffff92ed2fb40000(0000) knlGS:0000000000000000
         CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
         CR2: 0000562818aaa000 CR3: 000000045745a002 CR4: 00000000003606e0
         DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
         DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
         Call Trace:
          ? drm_dbg+0x87/0x90 [drm]
          dc_stream_release+0x28/0x50 [amdgpu]
          amdgpu_dm_connector_mode_valid+0xb4/0x1f0 [amdgpu]
          drm_helper_probe_single_connector_modes+0x492/0x6b0 [drm_kms_helper]
          drm_mode_getconnector+0x457/0x490 [drm]
          ? drm_connector_property_set_ioctl+0x60/0x60 [drm]
          drm_ioctl_kernel+0xa9/0xf0 [drm]
          drm_ioctl+0x201/0x3a0 [drm]
          ? drm_connector_property_set_ioctl+0x60/0x60 [drm]
          amdgpu_drm_ioctl+0x49/0x80 [amdgpu]
          do_vfs_ioctl+0xa4/0x630
          ? __sys_recvmsg+0x83/0xa0
          ksys_ioctl+0x60/0x90
          __x64_sys_ioctl+0x16/0x20
          do_syscall_64+0x5b/0x160
          entry_SYSCALL_64_after_hwframe+0x44/0xa9
         RIP: 0033:0x7f417110809b
         Code: 0f 1e fa 48 8b 05 ed bd 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d bd bd 0c 00 f7 d8 64 89 01 48
         RSP: 002b:00007ffdd8d1c268 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
         RAX: ffffffffffffffda RBX: 0000562818a8ebc0 RCX: 00007f417110809b
         RDX: 00007ffdd8d1c2a0 RSI: 00000000c05064a7 RDI: 0000000000000012
         RBP: 00007ffdd8d1c2a0 R08: 0000562819012280 R09: 0000000000000007
         R10: 0000000000000000 R11: 0000000000000246 R12: 00000000c05064a7
         R13: 0000000000000012 R14: 0000000000000012 R15: 00007ffdd8d1c2a0
         Modules linked in: nfsv4 dns_resolver nfs lockd grace fscache fuse vfat fat amdgpu intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul chash gpu_sched crc32_pclmul snd_hda_codec_realtek ghash_clmulni_intel amd_iommu_v2 iTCO_wdt iTCO_vendor_support ttm snd_hda_codec_generic snd_hda_codec_hdmi ledtrig_audio snd_hda_intel drm_kms_helper snd_hda_codec intel_cstate snd_hda_core drm snd_hwdep snd_seq snd_seq_device intel_uncore snd_pcm intel_rapl_perf snd_timer snd soundcore ioatdma pcspkr intel_wmi_thunderbolt mxm_wmi i2c_i801 lpc_ich pcc_cpufreq auth_rpcgss sunrpc igb crc32c_intel i2c_algo_bit dca wmi hid_cherry analog gameport joydev
      
       This patch is based on agd5f/drm-next-5.1-wip. It does not require all of
       that, but that branch contains at least one more dc_sink counting fix
       that I could spot.
       Signed-off-by: Mathias Fröhlich <Mathias.Froehlich@web.de>
       Reviewed-by: Leo Li <sunpeng.li@amd.com>
       Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      dcd5fb82
  2. Feb 20, 2019: 4 commits
  3. Feb 14, 2019: 1 commit
  4. Feb 07, 2019: 1 commit
    • drm/amd/display: Don't re-program planes for DPMS changes · 5062b797
      Committed by Nicholas Kazlauskas
      [Why]
       There are optc lock warnings and CRTC read timeouts when running the
      "igt@kms_plane@plane-position-hole-dpms-pipe-*" tests. These are
      caused by trying to reprogram planes that are not in the current
      context.
      
      DPMS off removes the stream from the context. In this case:
      
      new_crtc_state->active_changed = true
      new_crtc_state->mode_changed = false
      
      The planes are reprogrammed before the stream is removed from the
      context because stream_state->mode_changed = false.
      
       DPMS on adds the stream and planes back to the context. In this case:
      
      new_crtc_state->active_changed = true
      new_crtc_state->mode_changed = false
      
       The planes are also reprogrammed here before the stream is added to the
       context because stream_state->mode_changed is still false. They were not
       previously in the current context, so warnings occur here.
      
      [How]
       Also set stream_state->mode_changed = true when
       new_crtc_state->active_changed = true.
      
      This prevents reprogramming before the context is applied in DC. The
      programming will be done after the context is applied.
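
       As a rough sketch of that check (the dm_update_crtcs_state() placement
       and the surrounding variable names are assumptions, not the exact patch
       hunk):

          /* Sketch only: treat a DPMS toggle (active_changed) like a mode
           * change for the stream, so DC defers plane programming until the
           * new context is applied instead of touching planes outside of it. */
          if (dm_new_crtc_state->stream &&
              (new_crtc_state->mode_changed || new_crtc_state->active_changed))
                  dm_new_crtc_state->stream->mode_changed = true;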
       Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
       Reviewed-by: Sun peng Li <Sunpeng.Li@amd.com>
       Acked-by: Bhawanpreet Lakha <Bhawanpreet.Lakha@amd.com>
       Acked-by: Tony Cheng <Tony.Cheng@amd.com>
       Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      5062b797
  5. Feb 06, 2019: 11 commits
  6. Jan 29, 2019: 3 commits
  7. Jan 26, 2019: 6 commits
  8. Jan 17, 2019: 1 commit
  9. Jan 15, 2019: 11 commits
  10. Jan 09, 2019: 1 commit
    • drm/amdgpu: Don't fail resume process if resuming atomic state fails · 2d1af6a1
      Committed by Lyude Paul
      This is an ugly one unfortunately. Currently, all DRM drivers supporting
      atomic modesetting will save the state that userspace had set before
      suspending, then attempt to restore that state on resume. This probably
      worked very well at one point, like many other things, until DP MST came
      into the picture. While it's easy to restore state on normal display
      connectors that were disconnected during suspend regardless of their
       state post-resume, this can't really be done with MST because setting up
       a downstream sink requires performing sideband
      transactions between the source and the MST hub, sending out the ACT
      packets, etc.
      
      Because of this, there isn't really a guarantee that we can restore the
      atomic state we had before suspend once we've resumed. This sucks pretty
      bad, but so far I haven't run into any compositors that this actually
      causes serious issues with. Most compositors will notice the hotplug we
      send afterwards, and then reprobe state.
      
      Since nouveau and i915 also don't fail the suspend/resume process due to
      failing to restore the atomic state, let's make amdgpu match this
       behavior. Better to resume the GPU properly than to stop the process
       halfway because of a potentially unavoidable atomic commit failure.
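
       A minimal sketch of that behavior in the resume path (the dm_resume()
       placement and the dm->cached_state field are assumptions about where the
       change lands, not the exact hunk):

          /* Sketch only: log a failed atomic state restore instead of
           * propagating the error, so the rest of resume still runs. */
          ret = drm_atomic_helper_resume(ddev, dm->cached_state);
          if (ret)
                  DRM_ERROR("Restoring old state failed with %i\n", ret);

          return 0; /* don't fail the whole resume over a restore failure */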
      
      Eventually, we'll have a real fix for this problem on the DRM level. But
      we've got some more important low-hanging fruit to deal with first.
       Signed-off-by: Lyude Paul <lyude@redhat.com>
       Reviewed-by: Harry Wentland <harry.wentland@amd.com>
      Cc: Jerry Zuo <Jerry.Zuo@amd.com>
      Cc: <stable@vger.kernel.org> # v4.15+
      Link: https://patchwork.freedesktop.org/patch/msgid/20190108211133.32564-3-lyude@redhat.com
      2d1af6a1