1. 25 Aug 2016, 1 commit
  2. 11 Aug 2016, 1 commit
  3. 05 May 2016, 1 commit
  4. 03 May 2016, 3 commits
    • drm/radeon: don't include RADEON_HPD_NONE in HPD IRQ enable bitsets · b2c0cbd6
      Authored by Nicolai Stange
      The values of all but the RADEON_HPD_NONE member of the radeon_hpd_id
      enum transform 1:1 into bit positions within the 'enabled' bitset as
      assembled by evergreen_hpd_init():
      
        enabled |= 1 << radeon_connector->hpd.hpd;
      
      However, if ->hpd.hpd happens to equal RADEON_HPD_NONE == 0xff, UBSAN
      reports
      
        UBSAN: Undefined behaviour in drivers/gpu/drm/radeon/evergreen.c:1867:16
        shift exponent 255 is too large for 32-bit type 'int'
        [...]
        Call Trace:
         [<ffffffff818c4d35>] dump_stack+0xbc/0x117
         [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
         [<ffffffff819411bb>] ubsan_epilogue+0xd/0x4e
         [<ffffffff81941cbc>] __ubsan_handle_shift_out_of_bounds+0x1fb/0x254
         [<ffffffffa0ba7f2e>] ? atom_execute_table+0x3e/0x50 [radeon]
         [<ffffffff81941ac1>] ? __ubsan_handle_load_invalid_value+0x158/0x158
         [<ffffffffa0b87700>] ? radeon_get_pll_use_mask+0x130/0x130 [radeon]
         [<ffffffff81219930>] ? wake_up_klogd_work_func+0x60/0x60
         [<ffffffff8121a35e>] ? vprintk_default+0x3e/0x60
         [<ffffffffa0c603c4>] evergreen_hpd_init+0x274/0x2d0 [radeon]
         [<ffffffffa0c603c4>] ? evergreen_hpd_init+0x274/0x2d0 [radeon]
         [<ffffffffa0bd196e>] radeon_modeset_init+0x8ce/0x18d0 [radeon]
         [<ffffffffa0b71d86>] radeon_driver_load_kms+0x186/0x350 [radeon]
         [<ffffffffa03b6b16>] drm_dev_register+0xc6/0x100 [drm]
         [<ffffffffa03bc8c4>] drm_get_pci_dev+0xe4/0x490 [drm]
         [<ffffffff814b83f0>] ? kfree+0x220/0x370
         [<ffffffffa0b687c2>] radeon_pci_probe+0x112/0x140 [radeon]
         [...]
        =====================================================================
        radeon 0000:01:00.0: No connectors reported connected with modes
      
      At least on x86, there should be no user-visible impact as there
      
        1 << 0xff == 1 << (0xff & 31) == 1 << 31
      
      holds and 31 > RADEON_MAX_HPD_PINS. Thus, this patch is a cosmetic one.
      
      All of the above applies analogously to evergreen_hpd_fini(),
      r100_hpd_init(), r100_hpd_fini(), r600_hpd_init(), r600_hpd_fini(),
      rs600_hpd_init() and rs600_hpd_fini().
      
      Silence UBSAN by checking ->hpd.hpd for RADEON_HPD_NONE before ORing it
      into the 'enabled' bitset in the *_init() functions or the 'disabled'
      bitset in the *_fini() functions, respectively.
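
      A minimal sketch of the guard described above, written against the
      snippet quoted earlier (its exact placement inside each loop is an
      assumption):

        if (radeon_connector->hpd.hpd == RADEON_HPD_NONE)
                continue;  /* nothing to enable/disable for this connector */
        enabled |= 1 << radeon_connector->hpd.hpd;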
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      b2c0cbd6
    • drm/radeon: allow to force hard GPU reset. · 71fe2899
      Authored by Jérôme Glisse
      In some cases, like when freezing for hibernation, we need to be
      able to force a hard reset even if no engines are stuck. This patch
      adds a bool option to the current asic reset callback to allow forcing
      a hard reset on asics that support it.
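
      A sketch of the callback change described above (the parameter name and
      the example call are assumptions, not necessarily the patch's exact
      form):

        /* the asic reset callback gains a 'hard' flag */
        int (*asic_reset)(struct radeon_device *rdev, bool hard);

        /* e.g. when freezing for hibernation, request a full reset */
        rdev->asic->asic_reset(rdev, true);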
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      71fe2899
    • drm/radeon: consolidate evergreen uvd initialization and startup code. · d78d6f39
      Authored by Jérôme Glisse
      This matches the exact same control flow as the existing code; it just
      uses goto instead of multiple levels of if/else. It also clarifies
      early initialization failures by clearing rdev->has_uvd. Doing so does
      not change the end result from the hardware's point of view; it only
      avoids printing more error messages down the line, so only the
      original error is reported.
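
      A sketch of the resulting shape (labels, messages and the exact split
      of the helpers are assumptions):

        r = radeon_uvd_init(rdev);
        if (r) {
                dev_err(rdev->dev, "UVD init failed (%d).\n", r);
                goto error;
        }
        /* ring/IB setup would follow here, each failure jumping to error */
        return;

        error:
        /* Clear has_uvd so the rest of init/startup quietly skips UVD and
         * no further error messages are printed for the same failure. */
        rdev->has_uvd = false;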
      Acked-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      d78d6f39
  5. 28 Apr 2016, 1 commit
  6. 17 Mar 2016, 1 commit
  7. 19 Dec 2015, 1 commit
    • drm/radeon: Fixup hw vblank counter/ts for new drm_update_vblank_count() (v2) · c55d21ea
      Authored by Mario Kleiner
      commit 4dfd6486 "drm: Use vblank timestamps to guesstimate how many
      vblanks were missed", introduced in Linux 4.4-rc1, makes the drm core
      more fragile with respect to drivers which don't update hw vblank
      counters and vblank timestamps in sync with the firing of the vblank
      irq, essentially at the leading edge of vblank.
      
      This exposed a problem with radeon-kms/amdgpu-kms, which do not
      satisfy the above requirements:
      
      The vblank irq fires a few scanlines before start of vblank, but
      programmed pageflips complete at start of vblank and
      vblank timestamps update at start of vblank, whereas the
      hw vblank counter increments only later, at start of vsync.
      
      This leads to problems like off-by-one errors in vblank counter
      updates, vblank counters apparently going backwards, or vblank
      timestamps in which time appears to go backwards. The net result
      is stuttering graphics in games, little hangs, and total failure
      of timing-sensitive applications.
      
      See bug #93147 for an example of the regression on Linux 4.4-rc:
      
      https://bugs.freedesktop.org/show_bug.cgi?id=93147
      
      This patch tries to align all above events better from the
      viewpoint of the drm core / of external callers to fix the problem:
      
      1. The apparent start of vblank is shifted a few scanlines earlier,
      so the vblank irq now always happens after start of this extended
      vblank interval and thereby drm_update_vblank_count() always samples
      the updated vblank count and timestamp of the new vblank interval.
      
      To achieve this, the reporting of scanout positions by
      radeon_get_crtc_scanoutpos() now operates as if the vblank starts
      radeon_crtc->lb_vblank_lead_lines before the real start of the hw
      vblank interval. This means that the vblank timestamps which are based
      on these scanout positions will now update at this earlier start of
      vblank.
      
      2. The driver->get_vblank_counter() function will bump the returned
      vblank count as read from the hw by +1 if the query happens after
      the shifted earlier start of the vblank, but before the real hw increment
      at start of vsync, so the counter appears to increment at start of vblank
      in sync with the timestamp update.
      
      3. Calls from vblank irq-context and regular non-irq calls are now
      treated identically, always simulating the shifted vblank start, to
      avoid inconsistent results for queries happening from vblank irq vs.
      happening from drm_vblank_enable() or vblank_disable_fn().
      
      4. The radeon_flip_work_func will delay mmio programming of a pageflip
      until the start of the real vblank iff it happens to execute inside the
      shifted earlier start of the vblank, so pageflips now also appear to
      execute at the start of the shifted vblank, in sync with vblank counter
      and timestamp updates. This avoids some races between updates of vblank
      count and timestamps that are used for swap scheduling and pageflip
      execution, which could cause pageflips to execute before the scheduled
      target vblank.
      
      The lb_vblank_lead_lines "fudge" value is calculated as the size of
      the display controller's line buffer, in scanlines, for the given video
      mode: vblank irqs are triggered by the line buffer logic when the line
      buffer refill for a video frame ends, i.e. when the line buffer source
      read position enters the hw vblank. This means that a vblank irq could
      fire at most as many scanlines before the current reported scanout
      position of the crtc timing generator as the number of scanlines the
      line buffer can maximally hold for the given video mode.
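
      As a sketch, the counter bump from point 2 might look like this (helper
      and field names are assumptions; wrap-around at the end of the frame is
      ignored):

        u32 count = hw_frame_count(crtc);      /* raw hw frame counter */
        int line  = current_scanline(crtc);    /* timing generator position */
        int shifted_start = crtc->vblank_start - crtc->lb_vblank_lead_lines;

        /* Between the shifted (earlier) start of vblank and the real hw
         * increment at start of vsync, report count + 1 so the counter
         * appears to flip together with the vblank timestamp. */
        if (line >= shifted_start && line < crtc->vsync_start)
                count++;
        return count;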
      
      This patch has been successfully tested on an RV730 card with DCE-3
      display engine and on an evergreen card with DCE-4 display engine, in
      single-display and dual-display configurations, with different video
      modes.
      
      A similar patch is needed for amdgpu-kms to fix the same problem.
      
      Limitations:
      
      - Line buffer sizes in pixels are hard-coded on < DCE-4 to a value
        I just guessed to be high enough to work OK, lacking info on the
        true sizes at the moment.
      
      Fixes: fdo#93147
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Michel Dänzer <michel.daenzer@amd.com>
      Cc: Harry Wentland <Harry.Wentland@amd.com>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      
      (v1) Tested-by: Dave Witbrodt <dawitbro@sbcglobal.net>
      
      (v2) Refine radeon_flip_work_func() for better efficiency:
      
           In radeon_flip_work_func, replace the busy-waiting udelay(5) done
           with the event lock held by a more performant and energy-efficient
           usleep_range() until at least the predicted true start of hw
           vblank, with some slack for scheduler happiness. Release the event
           lock during the waits so as not to delay other outputs in doing
           their stuff, as the waiting can last up to 200 usecs in some cases.
      
           Retested on DCE-3 and DCE-4 to verify it still works nicely.
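
           A sketch of that wait (variable names and the slack are
           assumptions):

             /* Drop the event lock while sleeping so other crtcs are not
              * held up; re-take it before touching the flip state again. */
             spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
             usleep_range(min_udelay, 2 * min_udelay);
             spin_lock_irqsave(&crtc->dev->event_lock, flags);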
      
      (v2) Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      c55d21ea
  8. 05 Dec 2015, 2 commits
    • drm/radeon: Fixup hw vblank counter/ts for new drm_update_vblank_count() (v2) · 5b5561b3
      Authored by Mario Kleiner
      commit 4dfd6486 "drm: Use vblank timestamps to guesstimate how many
      vblanks were missed", introduced in Linux 4.4-rc1, makes the drm core
      more fragile with respect to drivers which don't update hw vblank
      counters and vblank timestamps in sync with the firing of the vblank
      irq, essentially at the leading edge of vblank.
      
      This exposed a problem with radeon-kms/amdgpu-kms, which do not
      satisfy the above requirements:
      
      The vblank irq fires a few scanlines before start of vblank, but
      programmed pageflips complete at start of vblank and
      vblank timestamps update at start of vblank, whereas the
      hw vblank counter increments only later, at start of vsync.
      
      This leads to problems like off-by-one errors in vblank counter
      updates, vblank counters apparently going backwards, or vblank
      timestamps in which time appears to go backwards. The net result
      is stuttering graphics in games, little hangs, and total failure
      of timing-sensitive applications.
      
      See bug #93147 for an example of the regression on Linux 4.4-rc:
      
      https://bugs.freedesktop.org/show_bug.cgi?id=93147
      
      This patch tries to align all above events better from the
      viewpoint of the drm core / of external callers to fix the problem:
      
      1. The apparent start of vblank is shifted a few scanlines earlier,
      so the vblank irq now always happens after start of this extended
      vblank interval and thereby drm_update_vblank_count() always samples
      the updated vblank count and timestamp of the new vblank interval.
      
      To achieve this, the reporting of scanout positions by
      radeon_get_crtc_scanoutpos() now operates as if the vblank starts
      radeon_crtc->lb_vblank_lead_lines before the real start of the hw
      vblank interval. This means that the vblank timestamps which are based
      on these scanout positions will now update at this earlier start of
      vblank.
      
      2. The driver->get_vblank_counter() function will bump the returned
      vblank count as read from the hw by +1 if the query happens after
      the shifted earlier start of the vblank, but before the real hw increment
      at start of vsync, so the counter appears to increment at start of vblank
      in sync with the timestamp update.
      
      3. Calls from vblank irq-context and regular non-irq calls are now
      treated identically, always simulating the shifted vblank start, to
      avoid inconsistent results for queries happening from vblank irq vs.
      happening from drm_vblank_enable() or vblank_disable_fn().
      
      4. The radeon_flip_work_func will delay mmio programming of a pageflip
      until the start of the real vblank iff it happens to execute inside the
      shifted earlier start of the vblank, so pageflips now also appear to
      execute at the start of the shifted vblank, in sync with vblank counter
      and timestamp updates. This avoids some races between updates of vblank
      count and timestamps that are used for swap scheduling and pageflip
      execution, which could cause pageflips to execute before the scheduled
      target vblank.
      
      The lb_vblank_lead_lines "fudge" value is calculated as the size of
      the display controller's line buffer, in scanlines, for the given video
      mode: vblank irqs are triggered by the line buffer logic when the line
      buffer refill for a video frame ends, i.e. when the line buffer source
      read position enters the hw vblank. This means that a vblank irq could
      fire at most as many scanlines before the current reported scanout
      position of the crtc timing generator as the number of scanlines the
      line buffer can maximally hold for the given video mode.
      
      This patch has been successfully tested on an RV730 card with DCE-3
      display engine and on an evergreen card with DCE-4 display engine, in
      single-display and dual-display configurations, with different video
      modes.
      
      A similar patch is needed for amdgpu-kms to fix the same problem.
      
      Limitations:
      
      - Line buffer sizes in pixels are hard-coded on < DCE-4 to a value
        I just guessed to be high enough to work OK, lacking info on the
        true sizes at the moment.
      
      Fixes: fdo#93147
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Michel Dänzer <michel.daenzer@amd.com>
      Cc: Harry Wentland <Harry.Wentland@amd.com>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      
      (v1) Tested-by: Dave Witbrodt <dawitbro@sbcglobal.net>
      
      (v2) Refine radeon_flip_work_func() for better efficiency:
      
           In radeon_flip_work_func, replace the busy-waiting udelay(5) done
           with the event lock held by a more performant and energy-efficient
           usleep_range() until at least the predicted true start of hw
           vblank, with some slack for scheduler happiness. Release the event
           lock during the waits so as not to delay other outputs in doing
           their stuff, as the waiting can last up to 200 usecs in some cases.
      
           Retested on DCE-3 and DCE-4 to verify it still works nicely.
      
      (v2) Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      5b5561b3
    • drm/radeon: Retry DDC probing on DVI on failure if we got an HPD interrupt · cb5d4166
      Authored by Lyude
      HPD signals on DVI ports can be fired off before the pins required for
      DDC probing actually make contact, due to the pins for HPD making
      contact first. This results in an HPD signal being asserted but DDC
      probing failing, so hotplugging occasionally fails.
      
      This is somewhat rare on most cards (depending on what angle you plug
      the DVI connector in at), but on some cards it happens constantly. The
      Radeon R5 on the machine used for testing this patch, for instance,
      runs into this issue just about every time I try to hotplug a DVI
      monitor, and as a result hotplugging almost never works.
      
      Rescheduling the hotplug work for a second when we run into an HPD
      signal with a failing DDC probe usually gives enough time for the rest
      of the connector's pins to make contact, and fixes this issue.
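
      A sketch of that retry (field names and the exact condition are
      assumptions):

        /* HPD fired but the DDC probe failed: the data pins may not be
         * seated yet, so run the hotplug work again in about a second. */
        if (hpd_asserted && !ddc_probe_ok)
                schedule_delayed_work(&rdev->hotplug_work,
                                      msecs_to_jiffies(1000));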
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Lyude <cpaul@redhat.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      cb5d4166
  9. 27 Oct 2015, 1 commit
  10. 09 Jul 2015, 1 commit
    • drm/radeon: Handle irqs only based on irq ring, not irq status regs. · 07f18f0b
      Authored by Mario Kleiner
      Trying to resolve issues with missed vblanks and impossible
      values inside delivered kms pageflip completion events showed
      that radeon's irq handling sometimes doesn't handle valid irqs,
      but silently skips them. This was observed for vblank interrupts.
      
      Although those irqs have corresponding events queued in the gpu's
      irq ring at the time of the interrupt, and therefore the corresponding
      handling code gets triggered by these events, the handling code
      sometimes silently skipped processing the irq. The reason for those
      skips is that the handling code double-checks, for each irq event,
      whether the corresponding irq status bits in the irq status registers
      are set. Sometimes those bits are not set at the time of the check
      for valid irqs, maybe due to some hardware race on some setups?
      
      The problem only seems to happen on some machine + card combos
      sometimes, e.g., it never happened during my testing of different PC
      cards of the DCE-2/3/4 generation a year ago, but happens consistently
      now on two different Apple Mac cards (RV730, DCE-3, in an Apple iMac,
      and Evergreen JUNIPER, DCE-4, in an Apple MacPro). It also doesn't
      happen at each interrupt but only occasionally, every couple of
      hundred or thousand vblank interrupts.
      
      This results in XOrg warning messages like
      
      "[  7084.472] (WW) RADEON(0): radeon_dri2_flip_event_handler:
      Pageflip completion event has impossible msc 420120 < target_msc 420121"
      
      as well as skipped frames and problems for applications that
      use kms pageflip events or vblank events, e.g., users of DRI2 and
      DRI3/Present, Waylands Weston compositor, etc. See also
      
      https://bugs.freedesktop.org/show_bug.cgi?id=85203
      
      After some talking to Alex and Michel, we decided to fix this
      by turning the double-check for asserted irq status bits into a
      warning. Whenever an irq event is queued in the IH ring, always
      execute the corresponding interrupt handler. Still check the irq
      status bits, but only to log a DRM_DEBUG message on a mismatch.
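
      Sketched for one event source (the status field, message text and
      handler call follow the driver's evergreen naming, but this is only an
      illustration):

        case 0: /* D1 vblank */
                if (!(disp_int & LB_D1_VBLANK_INTERRUPT))
                        DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
                /* handle the event from the IH ring unconditionally */
                radeon_crtc_handle_vblank(rdev, 0);
                break;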
      
      This fixed the problems reliably on both previously failing
      cards, an RV730 dual-head tested on both crtcs (pipes D1 and D2)
      and a triple-output Juniper HD-5770 card tested on all three
      available crtcs (D1/D2/D3). The r600 and evergreen irq handling
      is therefore tested, but the cik and si handling is only
      compile-tested due to lack of hw.
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      CC: Michel Dänzer <michel.daenzer@amd.com>
      CC: Alex Deucher <alexander.deucher@amd.com>
      CC: <stable@vger.kernel.org> # v3.16+
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      07f18f0b
  11. 29 May 2015, 1 commit
    • radeon: Deinline indirect register accessor functions · 9e5acbc2
      Authored by Denys Vlasenko
      This patch deinlines indirect register accessor functions.
      
      These functions perform two mmio accesses framed by a spin lock/unlock.
      The spin lock/unlock by itself takes more than 50 cycles in the ideal
      case (if the lock is exclusively cached on the current CPU).
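
      For reference, each of these accessors follows the same index/data
      pattern that used to be inlined in radeon.h; a simplified sketch for
      the PCIE case (register and lock names as in the driver, details
      unverified here):

        u32 rv370_pcie_rreg(struct radeon_device *rdev, u32 reg)
        {
                unsigned long flags;
                u32 r;

                spin_lock_irqsave(&rdev->pcie_idx_lock, flags);
                WREG32(RADEON_PCIE_INDEX, reg & rdev->pcie_reg_mask); /* mmio 1 */
                r = RREG32(RADEON_PCIE_DATA);                         /* mmio 2 */
                spin_unlock_irqrestore(&rdev->pcie_idx_lock, flags);
                return r;
        }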
      
      With this .config: http://busybox.net/~vda/kernel_config,
      after uninlining, these functions have the following sizes and
      callsite counts:
      
      r600_uvd_ctx_rreg: 111 bytes, 4 callsites
      r600_uvd_ctx_wreg: 113 bytes, 5 callsites
      eg_pif_phy0_rreg: 106 bytes, 13 callsites
      eg_pif_phy0_wreg: 108 bytes, 13 callsites
      eg_pif_phy1_rreg: 107 bytes, 13 callsites
      eg_pif_phy1_wreg: 108 bytes, 13 callsites
      rv370_pcie_rreg: 111 bytes, 21 callsites
      rv370_pcie_wreg: 113 bytes, 24 callsites
      r600_rcu_rreg: 111 bytes, 16 callsites
      r600_rcu_wreg: 113 bytes, 25 callsites
      cik_didt_rreg: 106 bytes, 10 callsites
      cik_didt_wreg: 107 bytes, 10 callsites
      tn_smc_rreg: 106 bytes, 126 callsites
      tn_smc_wreg: 107 bytes, 116 callsites
      eg_cg_rreg: 107 bytes, 20 callsites
      eg_cg_wreg: 108 bytes, 52 callsites
      
      The functions r100_mm_rreg() and r100_mm_wreg() have a fast path and
      a locked (slow) path. This patch deinlines only the slow path.
      
      r100_mm_rreg_slow: 78 bytes, 2083 callsites
      r100_mm_wreg_slow: 81 bytes, 3570 callsites
      
      Reduction in code size is more than 65,000 bytes:
      
          text     data      bss       dec     hex filename
      85740176 22294680 20627456 128662312 7ab3b28 vmlinux.before
      85674192 22294776 20627456 128598664 7aa4288 vmlinux
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      9e5acbc2
  12. 28 May 2015, 1 commit
  13. 12 May 2015, 1 commit
  14. 20 Mar 2015, 2 commits
  15. 04 Mar 2015, 1 commit
  16. 26 Feb 2015, 1 commit
  17. 22 Jan 2015, 2 commits
  18. 07 Nov 2014, 2 commits
  19. 30 Oct 2014, 1 commit
  20. 03 Oct 2014, 1 commit
  21. 01 Oct 2014, 1 commit
  22. 23 Sep 2014, 3 commits
  23. 19 Aug 2014, 1 commit
  24. 05 Aug 2014, 4 commits
  25. 23 Jul 2014, 1 commit
  26. 17 Jul 2014, 1 commit
    • drm/radeon: Prevent too early kms-pageflips triggered by vblank. · f53f81b2
      Authored by Mario Kleiner
      Since 3.16-rc1 we have this new failure:
      
      When the userspace XOrg ddx schedules vblank events to
      trigger deferred kms-pageflips, e.g., via the OML_sync_control
      extension call glXSwapBuffersMscOML(), or if a glXSwapBuffers()
      is called immediately after completion of a previous swapbuffers
      call, e.g., in a tight rendering loop with minimal rendering,
      it happens frequently that the pageflip ioctl() is executed
      within the same vblank in which a previous kms-pageflip completed,
      or - for deferred swaps - always one vblank earlier than requested
      by the client app.
      
      This causes premature pageflips and detection of failure by
      the ddx, e.g., XOrg log warnings like...
      
      "(WW) RADEON(1): radeon_dri2_flip_event_handler: Pageflip
      completion event has impossible msc 201025 < target_msc 201026"
      
      ... and error/invalid return values of glXWaitForSbcOML() and of the
      Intel_swap_events extension.
      
      The reason is the new way in which kms-pageflips are programmed
      since 3.16.
      
      This commit changes the time window in which the hw can
      execute pending programmed pageflips. Before, a pending flip
      would get executed anywhere within the vblank interval. Now
      a pending flip only gets executed at the leading edge of
      vblank (start of front porch), making sure that an invocation
      of the pageflip ioctl() within a given vblank interval will
      only lead to pageflip completion in the following vblank.
      
      Tested to death on a DCE-4 card.
      Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      f53f81b2
  27. 11 Jul 2014, 2 commits
  28. 10 Jun 2014, 1 commit