1. 01 July, 2010 (2 commits)
  2. 15 June, 2010 (1 commit)
    • radeon/kms: fix powerpc/rn50 untiled behaviour. · f5c5f040
      Dave Airlie committed
      Installing 2.6.34 on a Power5/rn50 combo machine, X showed buggy software rendering;
      enabling tiling in the DDX fixed it. Investigation showed that a further /16
      was needed in the untiled case on this chipset. Further investigation is needed
      into which other chips this could affect, possibly rv100->rv280. A hedged sketch
      of the idea follows this entry.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      f5c5f040
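      Purely as an illustration of the kind of change described above (not the
      actual patch), here is a hedged C sketch of a pitch helper where the untiled
      path on an rn50-class chip gets the extra division by 16; the function name,
      the unit conversion, and the constants are all assumptions.

        /*
         * Hypothetical sketch, not the real radeon code: the idea is that when
         * converting a framebuffer pitch into the units the display hardware
         * expects, the untiled case on this chipset needs a further division
         * by 16. All names and the exact granularities are assumptions.
         */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t crtc_pitch_units(uint32_t pitch_bytes, int tiled, int is_rn50)
        {
            uint32_t units = pitch_bytes / 8;   /* assumed base granularity */

            if (!tiled && is_rn50)
                units /= 16;                    /* the extra /16 for untiled rn50 */

            return units;
        }

        int main(void)
        {
            printf("tiled:   %u\n", crtc_pitch_units(5120, 1, 1));
            printf("untiled: %u\n", crtc_pitch_units(5120, 0, 1));
            return 0;
        }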
  3. 08 June, 2010 (1 commit)
    • drm/radeon/kms/pm: add mid profile · c9e75b21
      Alex Deucher committed
      This adds an additional profile, mid, to the pm profile
      code, which takes the place of the old low profile.  The default
      behavior remains the same: the auto profile now selects between the
      mid and high profiles based on the power source. However, you can now
      manually force the low profile, which was previously only available
      as a dpms-off state.  Enabling the low profile while the displays
      are on has been known to cause display corruption in some cases.
      A hedged sketch of the selection logic follows this entry.
      Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      c9e75b21
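      To make the profile behaviour described above concrete, here is a small
      hedged C sketch of the selection logic; the names (pm_profile, pick_profile,
      on_ac_power, displays_off) are hypothetical and this is not the driver's
      actual pm code. Auto picks high on AC and mid on battery, while low is used
      only when displays are off or when the user forces it.

        /* Hedged sketch of the profile selection described in the commit. */
        #include <stdbool.h>
        #include <stdio.h>

        enum pm_profile { PM_LOW, PM_MID, PM_HIGH };

        static enum pm_profile pick_profile(bool user_forced_low, bool displays_off,
                                            bool on_ac_power)
        {
            if (displays_off || user_forced_low)
                return PM_LOW;                      /* low: dpms-off, or explicitly forced */
            return on_ac_power ? PM_HIGH : PM_MID;  /* "auto": AC -> high, battery -> mid */
        }

        int main(void)
        {
            printf("battery, displays on : %d\n", pick_profile(false, false, false)); /* PM_MID */
            printf("AC, displays on      : %d\n", pick_profile(false, false, true));  /* PM_HIGH */
            printf("forced low           : %d\n", pick_profile(true,  false, true));  /* PM_LOW */
            return 0;
        }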
  4. 18 May, 2010 (15 commits)
  5. 28 April, 2010 (1 commit)
  6. 20 April, 2010 (1 commit)
  7. 19 April, 2010 (1 commit)
  8. 12 April, 2010 (1 commit)
  9. 06 April, 2010 (3 commits)
    • drm/radeon/kms: simplify & improve GPU reset V2 · 90aca4d2
      Jerome Glisse committed
      This simplifies and improves GPU reset for R1XX-R6XX hardware. It's
      not 100% reliable; here are the results:
      - R1XX/R2XX work a bunch of times in a row; sometimes it
        seems they can work indefinitely
      - R3XX: the most unreliable; sometimes you will be
        able to reset a few times, sometimes not even once
      - R5XX: more reliable than the previous hardware; it seems to work most
        of the time, but once in a while it fails for no obvious
        reason (same status as the previous reset, just not the same
        happy ending)
      - R6XX/R7XX: a lot more reliable with this patch, though
        it seems they can still fail after a bunch of resets (resetting every
        2 sec for 3 hours brought down the GPU & computer)
      
      This has been tested on various hardware; for some odd reason
      I wasn't able to lock up RS480/RS690 (while they used to
      love locking up).
      
      Note that on R1XX-R5XX the cursor will disappear after a
      lockup (haven't checked why); switching to the console and back
      to X will restore the cursor.
      
      The next step is to record the bogus command that led to
      the lockup.
      
      V2: fix the r6xx resume path to avoid reinitializing the blit
      module; use the gpu_lockup boolean to avoid entering an
      infinite wait loop on a fence while reinitializing the GPU.
      A hedged sketch of that fence-wait check follows this entry.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      90aca4d2
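      As a hedged illustration of the V2 note above (hypothetical names, not the
      driver's actual fence code): a fence wait can bail out with an error once a
      gpu_lockup flag is set, instead of spinning forever on a fence that can no
      longer be signaled while the GPU is being reinitialized.

        /* Hedged sketch: a lockup flag lets the fence wait give up cleanly. */
        #include <errno.h>
        #include <stdbool.h>
        #include <stdio.h>

        struct fake_dev {
            volatile bool gpu_lockup;    /* set when a lockup was detected / reset in progress */
        };

        struct fake_fence {
            struct fake_dev *dev;
            volatile bool signaled;
        };

        static int fence_wait(struct fake_fence *fence, unsigned long max_polls)
        {
            for (unsigned long i = 0; i < max_polls; i++) {
                if (fence->signaled)
                    return 0;            /* fence completed normally */
                if (fence->dev->gpu_lockup)
                    return -EDEADLK;     /* GPU locked up: stop waiting for a signal that won't come */
            }
            return -EBUSY;               /* gave up after max_polls iterations */
        }

        int main(void)
        {
            struct fake_dev dev = { .gpu_lockup = true };
            struct fake_fence fence = { .dev = &dev, .signaled = false };
            printf("fence_wait returned %d\n", fence_wait(&fence, 1000));
            return 0;
        }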
    • drm/radeon/kms: rename gpu_reset to asic_reset · a2d07b74
      Jerome Glisse committed
      Rename gpu_reset to asic_reset in preparation for having
      gpu_reset do more than just a basic ASIC reset.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      a2d07b74
    • drm/radeon/kms: fence cleanup + more reliable GPU lockup detection V4 · 225758d8
      Jerome Glisse committed
      This patch cleans up the fence code; it drops the timeout field of
      the fence, as the time to complete each IB is unpredictable and shouldn't
      be bounded.
      
      The fence cleanup leads to an improvement in GPU lockup detection; this
      patch introduces a callback, allowing an asic-specific test for
      lockup detection. In this patch the CP is used as the first indicator
      of a GPU lockup: if the CP doesn't make progress for 1 second, we assume
      we are facing a GPU lockup.
      
      To avoid the overhead of testing for GPU lockup too frequently while a fence
      takes time to be signaled, we query the lockup callback only every
      500 ms. There are plenty of code comments explaining the design and choices
      inside the code.
      
      This has been tested mostly on R3XX/R5XX hardware; on a normally running
      desktop (compiz, firefox, quake3 running) the lockup callback wasn't
      called once (1 hour session). Also tested by forcing a GPU lockup; the
      lockup was reported after the 1 s CP activity timeout.
      
      V2: switch to a 500 ms timeout so the GPU lockup check gets called at least
          2 times in less than 2 seconds.
      V3: store the last jiffies in the fence struct so that on ERESTART or EBUSY
          we keep track of how long we have already waited for a given fence.
      V4: make sure we have an up-to-date CP read pointer so we don't get
          false positives.
      A hedged sketch of this detection scheme follows this entry.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      225758d8
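      A hedged C sketch of the detection scheme described above, with hypothetical
      names and user-space timekeeping standing in for kernel jiffies: sample the CP
      read pointer, remember when it last moved, and report a lockup once it has made
      no progress for 1 second; callers would poll this only every ~500 ms.

        /*
         * Hedged sketch (hypothetical names, clock_gettime() instead of jiffies):
         * the CP read pointer is sampled; if it has not moved for more than
         * LOCKUP_TIMEOUT_MS, the GPU is considered locked up.
         */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        #define LOCKUP_TIMEOUT_MS 1000   /* 1 s of CP inactivity => assume lockup */

        struct cp_lockup_state {
            uint32_t last_rptr;          /* last CP read pointer we observed */
            struct timespec last_change; /* when it last made progress */
        };

        static int64_t ms_since(const struct timespec *then)
        {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            return (int64_t)(now.tv_sec - then->tv_sec) * 1000 +
                   (int64_t)(now.tv_nsec - then->tv_nsec) / 1000000;
        }

        /* Returns true if the CP has made no progress for LOCKUP_TIMEOUT_MS. */
        static bool cp_is_locked_up(struct cp_lockup_state *s, uint32_t current_rptr)
        {
            if (current_rptr != s->last_rptr) {
                /* CP moved: remember the new position and the time of progress. */
                s->last_rptr = current_rptr;
                clock_gettime(CLOCK_MONOTONIC, &s->last_change);
                return false;
            }
            return ms_since(&s->last_change) >= LOCKUP_TIMEOUT_MS;
        }

        int main(void)
        {
            struct cp_lockup_state state = { 0 };
            clock_gettime(CLOCK_MONOTONIC, &state.last_change);
            printf("progress seen : %d\n", cp_is_locked_up(&state, 42)); /* rptr moved */
            printf("just checked  : %d\n", cp_is_locked_up(&state, 42)); /* not yet 1 s idle */
            return 0;
        }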
  10. 01 April, 2010 (2 commits)
  11. 31 March, 2010 (4 commits)
  12. 30 March, 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo committed
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, while for others it was more appropriate
         to add it to an implementation .h or an embedding .c file.  This step
         added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed,
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs, requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
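      For readers unfamiliar with the change, a minimal sketch of the conversion
      (an illustrative kernel-tree fragment, not a file touched by this patch): a
      file that calls kmalloc()/kfree() used to compile only because some other
      header dragged slab.h in implicitly, and must now include it directly.

        /* Illustrative fragment: spell out the slab.h dependency explicitly. */
        #include <linux/slab.h>    /* kmalloc(), kfree(); GFP_KERNEL comes via gfp.h */

        static int *alloc_counter(void)
        {
            /* Without the explicit include above, this would break once
             * percpu.h stops pulling slab.h in for everyone. */
            return kmalloc(sizeof(int), GFP_KERNEL);
        }

        static void free_counter(int *counter)
        {
            kfree(counter);
        }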
  13. 15 March, 2010 (2 commits)
  14. 25 February, 2010 (1 commit)
  15. 18 February, 2010 (1 commit)
    • drm/radeon/kms: simplify memory controller setup V2 · d594e46a
      Jerome Glisse committed
      Get rid of _location and use _start/_end; also simplify the
      computation of vram_start|end & gtt_start|end. For R1XX-R2XX
      we place VRAM at the same address as the PCI aperture; those GPUs
      shouldn't have much memory and seem to behave better when
      set up that way. For R3XX and newer we place VRAM at 0. For
      R6XX-R7XX AGP we place VRAM before or after the AGP aperture;
      this might limit the VRAM size, but that is very unlikely.
      For IGP we don't change the VRAM placement. A hedged sketch of
      the placement logic follows this entry.
      
      Tested on (compiz,quake3,suspend/resume):
      PCI/PCIE:RV280,R420,RV515,RV570,RV610,RV710
      AGP:RV100,RV280,R420,RV350,RV620(RPB*),RV730
      IGP:RS480(RPB*),RS690,RS780(RPB*),RS880
      
      RPB: resume previously broken
      
      V2: correct the commit message to reflect the bug more accurately,
      and move the VRAM placement to 0 for most of the GPUs to avoid
      limiting VRAM.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      d594e46a
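      A hedged C sketch of the placement rules described above, using hypothetical
      structures and a simplified layout (no AGP aperture handling, IGP cases, or
      size clamping): pre-R300 parts put VRAM at the PCI aperture address, R3XX and
      newer put it at 0, and the GTT simply follows VRAM here.

        /* Hedged sketch of the VRAM/GTT placement described in the commit. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct mc_layout {
            uint64_t vram_start, vram_end;
            uint64_t gtt_start, gtt_end;
        };

        static struct mc_layout place_mc(bool pre_r300, uint64_t pci_aperture_base,
                                         uint64_t vram_size, uint64_t gtt_size)
        {
            struct mc_layout mc;

            /* R1XX/R2XX: VRAM at the PCI aperture address; R3XX and newer: at 0. */
            mc.vram_start = pre_r300 ? pci_aperture_base : 0;
            mc.vram_end   = mc.vram_start + vram_size - 1;

            /* GTT simply follows VRAM in this simplified sketch. */
            mc.gtt_start = mc.vram_end + 1;
            mc.gtt_end   = mc.gtt_start + gtt_size - 1;

            return mc;
        }

        int main(void)
        {
            struct mc_layout mc = place_mc(false, 0xd0000000ULL,
                                           256ULL << 20, 512ULL << 20);
            printf("VRAM: 0x%llx-0x%llx  GTT: 0x%llx-0x%llx\n",
                   (unsigned long long)mc.vram_start, (unsigned long long)mc.vram_end,
                   (unsigned long long)mc.gtt_start, (unsigned long long)mc.gtt_end);
            return 0;
        }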
  16. 11 February, 2010 (1 commit)
  17. 09 February, 2010 (2 commits)