1. 10 May 2012, 5 commits
    • drm/radeon: simplify semaphore handling v2 · a8c05940
      Jerome Glisse authored
      Directly use the suballocator to get small chunks of memory.
      It's equally fast and doesn't crash when we encounter a GPU reset.
      
      v2: rebased on new SA interface.
      Signed-off-by: Christian König <deathsimple@vodafone.de>
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/radeon: use one wait queue for all rings add fence_wait_any v2 · 0085c950
      Jerome Glisse authored
      Use one wait queue for all rings. When one ring makes progress, the
      others likely do too, and we are not expecting to have a lot of
      waiters anyway.
      
      Also add a fence_wait_any that will wait until the first fence in
      the fence array (one fence per ring) is signaled. This allows
      waiting on all rings at once.
      
      v2: some minor cleanups and improvements.
      Signed-off-by: Christian König <deathsimple@vodafone.de>
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
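      A minimal userspace sketch of the any-fence check this commit describes
      (all names and the per-ring state here are hypothetical stand-ins, not
      the driver's actual structures); in the driver, the single shared wait
      queue blocks the caller and a check like this re-runs whenever any ring
      makes progress:

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      #define NUM_RINGS 3

      /* Hypothetical per-ring state: the last sequence number signaled. */
      struct ring_state {
          uint64_t last_signaled;
      };

      struct fence {
          int ring;        /* which ring this fence belongs to */
          uint64_t seq;    /* sequence number to wait for */
      };

      /* Return the index of the first signaled fence in the array, or -1
       * if none is signaled yet.  A NULL entry means "no fence for this
       * ring".  This sketch shows only the check; the real code sleeps on
       * the shared wait queue between checks. */
      static int fence_check_any(struct ring_state *rings,
                                 struct fence **fences, size_t n)
      {
          for (size_t i = 0; i < n; i++) {
              if (!fences[i])
                  continue;
              if (rings[fences[i]->ring].last_signaled >= fences[i]->seq)
                  return (int)i;
          }
          return -1;
      }

      int main(void)
      {
          struct ring_state rings[NUM_RINGS] = { {5}, {10}, {0} };
          struct fence fa = { .ring = 0, .seq = 7 };  /* not signaled yet */
          struct fence fb = { .ring = 1, .seq = 9 };  /* already signaled */
          struct fence *fences[NUM_RINGS] = { &fa, &fb, NULL };

          assert(fence_check_any(rings, fences, NUM_RINGS) == 1);

          rings[0].last_signaled = 8;                 /* ring 0 progresses */
          assert(fence_check_any(rings, fences, NUM_RINGS) == 0);
          return 0;
      }
      ```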
    • drm/radeon: rework locking ring emission mutex in fence deadlock detection v2 · 8a47cc9e
      Christian König authored
      Some callers illegally called fence_wait_next/empty while holding
      the ring emission mutex. So don't relock the mutex in those cases,
      and move the actual locking into the fence code.
      
      v2: Don't try to unlock the mutex if it isn't locked.
      Signed-off-by: Christian König <deathsimple@vodafone.de>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/radeon: rework fence handling, drop fence list v7 · 3b7a2b24
      Jerome Glisse authored
      Using a 64-bit fence sequence we can directly compare sequence
      numbers to know whether a fence is signaled. The fence list thus
      becomes useless, as does the fence lock that mainly protected it.
      
      Things like ring.ready are no longer behind a lock; this should be
      OK, as ring.ready is initialized once and will only change when
      facing a lockup. The worst case is that we return -EBUSY just after
      a successful GPU reset, or that we go into a wait state instead of
      returning -EBUSY (thus delaying reporting -EBUSY to the fence wait
      caller).
      
      v2: Remove leftover comment; force using writeback on cayman and
          newer, thus not having to suffer from possible scratch reg
          exhaustion
      v3: Rebase on top of the change to the uint64 fence patch
      v4: Change the DCE5 test to force writeback on cayman and newer,
          but also on any APU such as the PALM or SUMO family
      v5: Rebase on top of the new uint64 fence patch
      v6: Just break if seq doesn't change any more. Use the radeon_fence
          prefix for all function names. Even if it's now highly
          optimized, try to avoid polling too often.
      v7: We should never poll last_seq from the hardware without waking
          the sleeping threads, otherwise we might lose events.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Christian König <deathsimple@vodafone.de>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
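      The v6 rule ("just break if seq doesn't change any more") can be
      sketched as a wait loop that samples the last signaled sequence and
      gives up once it stops advancing. Everything here is a hypothetical
      userspace model: sim_ring simulates GPU progress, and max_samples
      stands in for the driver's jiffies-based timeout.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Simulated ring that signals one more fence each time we sample
       * it, standing in for real GPU progress, until it "locks up" once
       * last_seq reaches stall_at. */
      struct sim_ring {
          uint64_t last_seq;
          uint64_t stall_at;
      };

      static uint64_t sample_last_seq(struct sim_ring *r)
      {
          if (r->last_seq < r->stall_at)
              r->last_seq++;
          return r->last_seq;
      }

      /* Wait until the ring reaches target_seq, bailing out (false) as
       * soon as the sequence stops changing between samples -- i.e. no
       * progress, so a lockup is suspected. */
      static bool wait_seq(struct sim_ring *r, uint64_t target_seq,
                           int max_samples)
      {
          uint64_t prev = sample_last_seq(r);
          for (int i = 0; i < max_samples; i++) {
              uint64_t cur = sample_last_seq(r);
              if (cur >= target_seq)
                  return true;      /* fence signaled */
              if (cur == prev)
                  return false;     /* no progress: suspected lockup */
              prev = cur;
          }
          return false;
      }

      int main(void)
      {
          struct sim_ring healthy = { .last_seq = 0, .stall_at = 100 };
          struct sim_ring stuck   = { .last_seq = 0, .stall_at = 3 };

          assert(wait_seq(&healthy, 5, 32));   /* progresses and signals */
          assert(!wait_seq(&stuck, 10, 32));   /* stalls at 3: bail out */
          return 0;
      }
      ```

      The v7 note adds a constraint this sketch omits: every time the real
      code re-reads last_seq it must also wake the sleeping waiters, or an
      event observed only by the poller could be lost.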
    • drm/radeon: convert fence to uint64_t v4 · bb635567
      Jerome Glisse authored
      This converts fences to use a uint64_t sequence number; the
      intention is to use the fact that uint64_t is big enough that we
      don't need to care about wraparound.
      
      Tested with and without writeback, using 0xFFFFF000 as the initial
      fence sequence and thus allowing the wraparound from 32 bits to
      64 bits to be tested.
      
      v2: Add a comment about the possible race between CPU & GPU, and a
          comment stressing that we need 2-dword alignment for
          R600_WB_EVENT_OFFSET. Read fence sequences in the reverse order
          of the GPU writing them, so we mitigate the race between CPU
          and GPU.
      
      v3: Drop the need for the ring to emit the 64-bit fence; just have
          each ring emit the lower 32 bits of the fence sequence. We
          handle the wrap over 32 bits in fence_process.
      
      v4: Just a small optimization: don't reread the last_seq value if
          the loop restarts, since we already know its value anyway.
          Also start the seq value at zero, not one, and use pre- instead
          of post-increment in emit, otherwise wait_empty will deadlock.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Christian König <deathsimple@vodafone.de>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
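      The 32-to-64-bit extension described in v3 can be sketched as
      follows. extend_seq is a hypothetical helper, not the driver's
      actual function, and this simplified version ignores the CPU/GPU
      read-ordering race that the real fence_process also guards against:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Each ring only writes the lower 32 bits of its sequence; extend
       * a freshly read 32-bit hardware value against the 64-bit
       * last-known sequence, accounting for 32-bit wraparound. */
      static uint64_t extend_seq(uint64_t last_seq, uint32_t hw_seq)
      {
          uint64_t seq = (last_seq & ~0xFFFFFFFFULL) | hw_seq;
          /* If the 32-bit value went backwards, the counter wrapped. */
          if (seq < last_seq)
              seq += 0x100000000ULL;
          return seq;
      }

      int main(void)
      {
          /* No wrap: same upper word. */
          assert(extend_seq(0x00000000FFFFF000ULL, 0xFFFFF100) ==
                 0x00000000FFFFF100ULL);
          /* Wrap from 32 bits to 64 bits, as exercised by the
           * 0xFFFFF000 initial sequence mentioned above. */
          assert(extend_seq(0x00000000FFFFFFF0ULL, 0x00000010) ==
                 0x0000000100000010ULL);
          return 0;
      }
      ```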
  2. 03 May 2012, 7 commits
  3. 14 February 2012, 1 commit
  4. 06 January 2012, 1 commit
  5. 21 December 2011, 9 commits
  6. 22 October 2011, 1 commit
  7. 15 September 2011, 1 commit
  8. 27 July 2011, 1 commit
  9. 17 June 2011, 1 commit
  10. 10 April 2011, 1 commit
  11. 09 April 2011, 1 commit
  12. 17 March 2011, 1 commit
  13. 16 December 2010, 1 commit
  14. 09 November 2010, 1 commit
  15. 06 October 2010, 2 commits
    • drm/radeon/kms/r6xx+: use new style fencing (v3) · d0f8a854
      Alex Deucher authored
      On r6xx+ a newer fence mechanism was implemented to replace
      the old wait_until plus scratch regs setup.  A single EOP event
      will flush the destination caches, write a fence value, and generate
      an interrupt.  This is the recommended fence mechanism on r6xx+ asics.
      
      This requires my previous writeback patch.
      
      v2: fix typo that enabled event fence checking on all asics
      rather than just r6xx+.
      
      v3: properly enable EOP interrupts
      Should fix:
      https://bugs.freedesktop.org/show_bug.cgi?id=29972
      Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/radeon/kms: enable writeback (v2) · 724c80e1
      Alex Deucher authored
      When writeback is enabled, the GPU shadows writes to certain
      registers into a buffer in memory.  The driver can then read
      the values from the shadow rather than reading back from the
      register across the bus.  Writeback can be disabled by setting
      the no_wb module param to 1.
      
      On r6xx/r7xx/evergreen, the following registers are shadowed:
      - CP scratch registers
      - CP read pointer
      - IH write pointer
      On r1xx-r5xx, the following registers are shadowed:
      - CP scratch registers
      - CP read pointer
      
      v2:
      - Combine wb patches for r6xx-evergreen and r1xx-r5xx
      - Writeback is disabled on AGP boards since it tends to be
      unreliable on AGP using the gart.
      - Check radeon_wb_init return values properly.
      Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
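      A rough userspace model of the shadow-versus-register read path the
      commit describes. The offset, struct names, and no_wb plumbing are
      illustrative only, not the driver's real layout:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical index of the CP read pointer in the shadow page. */
      #define WB_CP_RPTR_OFFSET 0

      struct wb_state {
          bool enabled;              /* false when no_wb is set to 1 */
          volatile uint32_t *shadow; /* CPU-visible writeback page */
      };

      /* Simulated MMIO read across the bus -- slow on real hardware. */
      static uint32_t mmio_read_rptr(uint32_t mmio_value)
      {
          return mmio_value;
      }

      /* Prefer the memory shadow; fall back to a register read. */
      static uint32_t read_cp_rptr(struct wb_state *wb, uint32_t mmio_value)
      {
          if (wb->enabled)
              return wb->shadow[WB_CP_RPTR_OFFSET];
          return mmio_read_rptr(mmio_value);
      }

      int main(void)
      {
          uint32_t page[4] = { 42, 0, 0, 0 };
          struct wb_state wb = { .enabled = true, .shadow = page };

          assert(read_cp_rptr(&wb, 7) == 42); /* shadow wins */
          wb.enabled = false;                 /* no_wb=1 */
          assert(read_cp_rptr(&wb, 7) == 7);  /* register read */
          return 0;
      }
      ```

      The AGP exception in v2 fits this split naturally: with writeback
      disabled, every read simply takes the register path.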
  16. 27 April 2010, 1 commit
    • drm/radeon/kms: R3XX-R4XX fix GPU reset code · a1e9ada3
      Jerome Glisse authored
      The previous reset code led to a computer hard lockup (needing to
      unplug the power to reboot the computer) on various configurations.
      This patch changes the reset code to avoid the hard lockup. The GPU
      reset still fails most of the time, but at least the user can log
      in remotely or properly shut down the computer.
      
      Two issues were leading to the hard lockup:
      - Writing to the scratch register led to a hard lockup, most likely
      because the writeback mechanism is in a fuzzy state after a GPU
      lockup.
      - Resetting the GPU memory controller without reinitializing it
      afterwards led to a hard lockup. We only reinitialized after a
      successful reset, so an unsuccessful reset quickly led to a hard
      lockup.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  17. 06 April 2010, 3 commits
    • drm/radeon/kms: simplify & improve GPU reset V2 · 90aca4d2
      Jerome Glisse authored
      This simplifies and improves GPU reset for R1XX-R6XX hardware. It's
      not 100% reliable; here are the results:
      - R1XX/R2XX works a bunch of times in a row; sometimes it seems it
        can work indefinitely
      - R3XX/R4XX is the most unreliable one; sometimes you will be able
        to reset a few times, sometimes not even once
      - R5XX is more reliable than the previous hardware; it seems to
        work most of the time, but once in a while it fails for no
        obvious reason (same status as a previous reset, just not the
        same happy ending)
      - R6XX/R7XX are a lot more reliable with this patch; still, it
        seems they can fail after a bunch of resets (a reset every 2 s
        for 3 hours brings down the GPU & computer)
      
      This has been tested on various hardware; for some odd reason I
      wasn't able to lock up RS480/RS690 (while they used to love locking
      up).
      
      Note that on R1XX-R5XX the cursor will disappear after a lockup;
      I haven't checked why. Switching to the console and back to X will
      restore the cursor.
      
      Next step is to record the bogus command that led to the lockup.
      
      V2 Fix the r6xx resume path to avoid reinitializing the blit
      module; use the gpu_lockup boolean to avoid entering an infinite
      waiting loop on a fence while reinitializing the GPU.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/radeon/kms: rename gpu_reset to asic_reset · a2d07b74
      Jerome Glisse authored
      Rename gpu_reset to asic_reset in preparation for gpu_reset doing
      more than just a basic asic reset.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/radeon/kms: fence cleanup + more reliable GPU lockup detection V4 · 225758d8
      Jerome Glisse authored
      This patch cleans up the fence code; it drops the fence timeout
      field, as the time to complete each IB is unpredictable and
      shouldn't be bounded.
      
      The fence cleanup leads to a GPU lockup detection improvement; this
      patch introduces a callback, allowing asic-specific tests for
      lockup detection. In this patch the CP is used as a first indicator
      of GPU lockup. If the CP doesn't make progress for 1 second, we
      assume we are facing a GPU lockup.
      
      To avoid the overhead of testing for GPU lockup frequently (due to
      fences taking time to be signaled), we query the lockup callback
      every 500 ms. There are plenty of code comments explaining the
      design & choices inside the code.
      
      This has been tested mostly on R3XX/R5XX hardware; in a normally
      running desktop (compiz, firefox, quake3 running) the lockup
      callback wasn't called once (1 hour session). Also tested with a
      forced GPU lockup, which was reported after the 1 s CP activity
      timeout.
      
      V2 switch to a 500 ms timeout so the GPU lockup check gets called
         at least 2 times in less than 2 s.
      V3 store the last jiffies in the fence struct so that on ERESTART
         or EBUSY we keep track of how long we have already waited for a
         given fence.
      V4 make sure we get an up-to-date CP read pointer so we don't get
         false positives.
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
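      The progress check this commit describes might be sketched like
      this. The names and the millisecond clock are stand-ins (the driver
      keys off jiffies and an asic-specific callback), and the constants
      come straight from the commit message:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative timings from the commit message. */
      #define LOCKUP_CHECK_MS  500   /* query the callback every 500 ms */
      #define LOCKUP_LIMIT_MS 1000   /* no CP progress for 1 s => lockup */

      struct lockup_state {
          uint32_t last_rptr;   /* CP read pointer at the last check */
          uint64_t last_ms;     /* timestamp of last observed progress */
      };

      /* Return true when the CP made no progress for LOCKUP_LIMIT_MS.
       * rptr must be an up-to-date CP read pointer (the V4 note: always
       * refresh it to avoid false positives). */
      static bool gpu_is_lockup(struct lockup_state *s, uint32_t rptr,
                                uint64_t now_ms)
      {
          if (rptr != s->last_rptr) {
              s->last_rptr = rptr;  /* CP progressed: reset the clock */
              s->last_ms = now_ms;
              return false;
          }
          return now_ms - s->last_ms >= LOCKUP_LIMIT_MS;
      }

      int main(void)
      {
          struct lockup_state s = { .last_rptr = 0, .last_ms = 0 };

          assert(!gpu_is_lockup(&s, 10, 500));   /* progress at t=500 */
          assert(!gpu_is_lockup(&s, 10, 1000));  /* stalled only 500 ms */
          assert(gpu_is_lockup(&s, 10, 1500));   /* stalled for 1000 ms */
          return 0;
      }
      ```

      Querying every LOCKUP_CHECK_MS while declaring a lockup only after
      LOCKUP_LIMIT_MS matches the V2 note: the check runs at least twice
      before the 1 s limit can trip.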
  18. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare
      for this change by updating users of gfp and slab facilities to
      include those headers directly instead of assuming availability.
      As this conversion needs to touch a large number of source files,
      the following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It's put in the include block
        which contains core kernel includes, in the same order that the
        rest are ordered - alphabetical, Christmas tree, reverse
        Christmas tree, or at the end if there doesn't seem to be any
        matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the
         inclusion; some needed manual addition, while for others adding
         it to an implementation .h or embedding .c file was more
         appropriate.  This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around
         .h files could easily lead to inclusion dependency hell.  Most
         gfp.h inclusion directives were ignored, as stuff from gfp.h
         was usually widely available and often used in preprocessor
         macros.  Each slab.h inclusion directive was examined and added
         manually as necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and
         failures were fixed.  CONFIG_GCOV_KERNEL was turned off for all
         tests (as my distributed build env didn't work with gcov
         compiles) and a few more options had to be turned off depending
         on the arch to make things build (like ipr on powerpc/64, which
         failed due to a missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the build
      tests, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one
      of the arch headers, which should be easily discoverable on most
      builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  19. 07 January 2010, 1 commit