1. 02 Aug, 2010 (2 commits)
  2. 01 Jul, 2010 (2 commits)
    • R
      DRM / radeon / KMS: Fix hibernation regression related to radeon PM (was: Re:... · 3f53eb6f
      Rafael J. Wysocki committed
      DRM / radeon / KMS: Fix hibernation regression related to radeon PM (was: Re: [Regression, post-2.6.34] Hibernation broken on machines with radeon/KMS and r300)
      
      There is a regression from 2.6.34 related to the recent radeon power
      management changes, caused by attempting to cancel a delayed work
      item that's never been scheduled.  However, the code as is has some
      other issues potentially leading to visible problems.
      
      First, the mutex around cancel_delayed_work() in radeon_pm_suspend()
      doesn't really serve any purpose, because cancel_delayed_work() only
      tries to delete the work's timer.  Moreover, it doesn't prevent the
      work handler from running, so the handler can do some wrong things if
      it wins the race and in that case it will rearm itself to do some
      more wrong things going forward.  So, I think it's better to wait for
      the handler to return in case it's already been queued up for
      execution.  Also, it should be prevented from rearming itself in that
      case.
      
      Second, in radeon_set_pm_method() the cancel_delayed_work() is not
      sufficient to prevent the work handler from running and queuing
      itself up for the next run (the failure scenario is that
      cancel_delayed_work() returns 0, so the handler is run, it waits on
      the mutex and then rearms itself after the mutex has been released),
      so again the work handler should be prevented from rearming itself in
      that case.
      
      Finally, there's a potential deadlock in radeon_pm_fini(), because
      cancel_delayed_work_sync() is called under rdev->pm.mutex, but the
      work handler tries to acquire the same mutex (if it wins the race).
      
      Fix the issues described above.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      3f53eb6f
    • A
      drm/radeon/kms/igp: fix possible divide by 0 in bandwidth code (v2) · f892034a
      Alex Deucher committed
      Some IGP systems specify the system memory clock in the Firmware
      table rather than the IGP info table.  Check both and make sure
      we have a valid system memory clock value.
      
      v2: make sure rs690_pm_info is called on rs780/rs880 as well.
      
      fixes a regression since 07d4190327b02ab3aaad25a2d168f79d92e8f8c2.
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
      Tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      f892034a
  3. 08 Jun, 2010 (2 commits)
  4. 03 Jun, 2010 (1 commit)
  5. 24 May, 2010 (1 commit)
  6. 21 May, 2010 (1 commit)
  7. 18 May, 2010 (13 commits)
  8. 23 Apr, 2010 (2 commits)
  9. 09 Apr, 2010 (4 commits)
  10. 07 Apr, 2010 (1 commit)
    • D
      drm/fb: fix fbdev object model + cleanup properly. · 38651674
      Dave Airlie committed
      The fbdev layer in the kms code should act like a consumer of the kms services and avoid relying on information stored in the kms core structures in order for it to work.
      
      This patch
      
      a) removes the info pointer/pseudo palette from the core drm_framebuffer structure and moves it to the fbdev helper layer; it also removes the core drm's list of kernel kms fbdevs.
      b) migrated all the fb helper functions out of the crtc helper file into the fb helper file.
      c) pushed the fb probing/hotplug control into the driver
      d) makes the surface sizes into a structure for ease of passing
      This changes the intel/radeon/nouveau drivers to use the new helper.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      38651674
  11. 06 Apr, 2010 (3 commits)
    • J
      drm/radeon/kms: simplify & improve GPU reset V2 · 90aca4d2
      Jerome Glisse committed
      This simplifies and improves GPU reset for R1XX-R6XX hw.  It's
      not 100% reliable; here are the results:
      - R1XX/R2XX work a bunch of times in a row; sometimes it
        seems they can work indefinitely
      - R3XX/R4XX are the most unreliable ones; sometimes you will
        be able to reset a few times, sometimes not even once
      - R5XX is more reliable than the previous hw; it seems to work
        most of the time, but once in a while it fails for no obvious
        reason (same status as the previous reset, just not the same
        happy ending)
      - R6XX/R7XX are a lot more reliable with this patch; still,
        it seems they can fail after a bunch of resets (resetting
        every 2 sec for 3 hours brings down the GPU & computer)
      
      This has been tested on various hw; for some odd reason
      I wasn't able to lock up RS480/RS690 (while they used to
      love locking up).
      
      Note that on R1XX-R5XX the cursor will disappear after a
      lockup; I haven't checked why.  Switching to the console and
      back to X will restore the cursor.
      
      The next step is to record the bogus command that led to
      the lockup.
      
      V2: Fix the r6xx resume path to avoid reinitializing the blit
      module; use the gpu_lockup boolean to avoid entering an
      infinite waiting loop on a fence while reinitializing the GPU.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      90aca4d2
    • J
      drm/radeon/kms: rename gpu_reset to asic_reset · a2d07b74
      Jerome Glisse committed
      This patch renames gpu_reset to asic_reset in anticipation of
      gpu_reset doing more than just a basic asic reset.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      a2d07b74
    • J
      drm/radeon/kms: fence cleanup + more reliable GPU lockup detection V4 · 225758d8
      Jerome Glisse committed
      This patch cleans up the fence code; it drops the timeout field of
      the fence, as the time to complete each IB is unpredictable and
      shouldn't be bounded.
      
      The fence cleanup leads to a GPU lockup detection improvement.  This
      patch introduces a callback allowing asic-specific tests for
      lockup detection.  In this patch the CP is used as the first
      indicator of a GPU lockup: if the CP doesn't make progress for
      1 second, we assume we are facing a GPU lockup.
      
      To avoid the overhead of testing for GPU lockup frequently while
      fences take time to be signaled, we query the lockup callback every
      500 msec.  There are plenty of code comments explaining the design
      & choices inside the code.
      
      This has been tested mostly on R3XX/R5XX hw; in a normally running
      desktop (compiz, firefox, quake3 running) the lockup callback wasn't
      called once (1 hour session).  Also tested with a forced GPU lockup;
      the lockup was reported after the 1s CP activity timeout.
      
      V2: switch to a 500ms timeout so the GPU lockup check gets called
          at least 2 times in less than 2 sec.
      V3: store the last jiffies in the fence struct so on ERESTART, EBUSY
          we keep track of how long we have already waited for a given fence.
      V4: make sure we get an up-to-date CP read pointer so we don't have
          false positives.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      225758d8
  12. 31 Mar, 2010 (4 commits)
  13. 15 Mar, 2010 (4 commits)