1. 04 Feb, 2011 1 commit
  2. 02 Feb, 2011 1 commit
  3. 07 Jan, 2011 3 commits
  4. 06 Jan, 2011 1 commit
      drm/radeon: use system_wq instead of dev_priv->wq · 32c87fca
Tejun Heo authored
      With cmwq, there's no reason for radeon to use a dedicated workqueue.
      Drop dev_priv->wq and use system_wq instead.
      
      Because radeon_driver_irq_uninstall_kms() may be called from
      unsleepable context, the work items can't be flushed from there.
      Instead, init and flush from radeon_irq_kms_init/fini().
      
      While at it, simplify canceling/flushing of rdev->pm.dynpm_idle_work.
      Always initialize and sync cancel instead of being unnecessarily smart
      about it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Alex Deucher <alexdeucher@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
      32c87fca
5. 23 Nov, 2010 1 commit
  6. 22 Nov, 2010 1 commit
      drm/kms/radeon: Add support for precise vblank timestamping. · f5a80209
Mario Kleiner authored
      This patch adds new functions for use by the drm core:
      
      .get_vblank_timestamp() provides a precise timestamp
      for the end of the most recent (or current) vblank
      interval of a given crtc, as needed for the DRI2
      implementation of the OML_sync_control extension.
      
      It is a thin wrapper around the drm function
      drm_calc_vbltimestamp_from_scanoutpos() which does
      almost all the work and is shared across drivers.
      
      .get_scanout_position() provides the current horizontal
      and vertical video scanout position and "in vblank"
      status of a given crtc, as needed by the drm for use by
      drm_calc_vbltimestamp_from_scanoutpos().
      
      The function is also used by the dynamic gpu reclocking
      code to determine when it is safe to reclock inside vblank.
      
For that purpose radeon_pm_in_vbl() is modified to
accommodate a small change in the function prototype of
radeon_get_crtc_scanoutpos(), which is hooked up to
.get_scanout_position().
      
This code has been tested on AVIVO hardware, an RV530
(ATI Mobility Radeon X1600) in an Intel Core 2 Duo MacBook Pro,
and an R600 variant (FireGL V7600) in a single-CPU
AMD Athlon 64 PC.
Signed-off-by: Mario Kleiner <mario.kleiner@tuebingen.mpg.de>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
      f5a80209
7. 06 Oct, 2010 1 commit
  8. 27 Aug, 2010 1 commit
  9. 20 Aug, 2010 1 commit
  10. 10 Aug, 2010 1 commit
  11. 02 Aug, 2010 2 commits
  12. 29 Jul, 2010 1 commit
  13. 01 Jul, 2010 1 commit
      DRM / radeon / KMS: Fix hibernation regression related to radeon PM (was: Re:... · 3f53eb6f
Rafael J. Wysocki authored
      DRM / radeon / KMS: Fix hibernation regression related to radeon PM (was: Re: [Regression, post-2.6.34] Hibernation broken on machines with radeon/KMS and r300)
      
      There is a regression from 2.6.34 related to the recent radeon power
      management changes, caused by attempting to cancel a delayed work
      item that's never been scheduled.  However, the code as is has some
      other issues potentially leading to visible problems.
      
      First, the mutex around cancel_delayed_work() in radeon_pm_suspend()
      doesn't really serve any purpose, because cancel_delayed_work() only
      tries to delete the work's timer.  Moreover, it doesn't prevent the
      work handler from running, so the handler can do some wrong things if
      it wins the race and in that case it will rearm itself to do some
      more wrong things going forward.  So, I think it's better to wait for
      the handler to return in case it's already been queued up for
      execution.  Also, it should be prevented from rearming itself in that
      case.
      
Second, in radeon_set_pm_method() the cancel_delayed_work() is not
sufficient to prevent the work handler from running and queuing
itself up for the next run (the failure scenario is that
cancel_delayed_work() returns 0, so the handler is run, it waits on
the mutex and then rearms itself after the mutex has been released),
so again the work handler should be prevented from rearming itself in
that case.
      
      Finally, there's a potential deadlock in radeon_pm_fini(), because
      cancel_delayed_work_sync() is called under rdev->pm.mutex, but the
      work handler tries to acquire the same mutex (if it wins the race).
      
      Fix the issues described above.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Alex Deucher <alexdeucher@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
      3f53eb6f
14. 08 Jun, 2010 6 commits
  15. 03 Jun, 2010 1 commit
  16. 18 May, 2010 17 commits