1. 14 Mar 2013, 1 commit
  2. 04 Mar 2013, 1 commit
  3. 18 Jan 2013, 6 commits
  4. 16 Jan 2013, 1 commit
    • drm/i915: Invalidate the relocation presumed_offsets along the slow path · 262b6d36
      Chris Wilson authored
      In the slow path, we are forced to copy the relocations prior to
      acquiring the struct mutex in order to handle pagefaults. We forgo
      copying the new offsets back into the relocation entries in order to
      prevent a recursive locking bug should we trigger a pagefault whilst
      holding the mutex for the reservations of the execbuffer. Therefore, we
      need to reset the presumed_offsets just in case the objects are rebound
      back into their old locations after relocating for this execbuffer - if
      that were to happen we would assume the relocations were valid and leave
      the actual pointers to the kernel dangling - an instant hang.
      
      Fixes regression from commit bcf50e27
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Sun Nov 21 22:07:12 2010 +0000
      
          drm/i915: Handle pagefaults in execbuffer user relocations
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=55984
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      262b6d36
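      The fix boils down to poisoning every copied presumed_offset so that a later pass can never trust a stale value and skip the fixup. A minimal sketch of that idea, assuming an illustrative entry struct and invalid-offset marker (not the real drm_i915_gem_relocation_entry layout):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative relocation entry; only the fields relevant here. */
struct reloc_entry {
    uint64_t presumed_offset;
    uint32_t target_handle;
};

/* Hypothetical "never a valid GTT offset" marker. */
#define PRESUMED_OFFSET_INVALID ((uint64_t)-1)

/* Slow path: after copying the relocations from userspace, reset every
 * presumed_offset so a later execbuffer cannot assume the relocation is
 * still valid if the object ends up rebound at its old location. */
static void invalidate_presumed_offsets(struct reloc_entry *relocs, size_t count)
{
    for (size_t i = 0; i < count; i++)
        relocs[i].presumed_offset = PRESUMED_OFFSET_INVALID;
}
```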
  5. 18 Dec 2012, 1 commit
    • drm/i915: Implement workaround for broken CS tlb on i830/845 · b45305fc
      Daniel Vetter authored
      Now that Chris Wilson demonstrated that the key for stability on early
      gen 2 is to simply _never_ exchange the physical backing storage of
      batch buffers, I've taken a stab at a kernel solution. Doesn't look too
      nefarious imho, now that I don't try to be too clever for my own good
      any more.
      
      v2: After discussing the various techniques, we've decided to always blit
      batches on the suspect devices, but allow userspace to opt out of the
      kernel workaround and assume full responsibility for providing coherent
      batches. The principal reason is that avoiding the blit does improve
      performance in a few key microbenchmarks and also in cairo-trace
      replays.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet:
      - Drop the hunk which uses HAS_BROKEN_CS_TLB to implement the ring
        wrap w/a. Suggested by Chris Wilson.
      - Also add the ACTHD check from Chris Wilson for the error state
        dumping, so that we still catch batches when userspace opts out of
        the w/a.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b45305fc
  6. 04 Dec 2012, 1 commit
  7. 29 Nov 2012, 2 commits
    • drm/i915: Kill i915_gem_execbuffer_wait_for_flips() · ca9c46c5
      Ville Syrjälä authored
      As per Chris Wilson's suggestion make
      i915_gem_execbuffer_wait_for_flips() go away.
      
      This was used to stall the GPU ring while there are pending
      page flips involving the relevant BO, i.e. while the BO is still
      being scanned out by the display controller.
      
      The recommended alternative is to use the page flip events to
      wait for the page flips to fully complete before reusing the BO
      of the old front buffer. Or use more buffers.
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Kristian Høgsberg <krh@bitplanet.net>
      Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      [danvet: don't remove obj->pending_flips, still required due to
      reorder patches.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      ca9c46c5
    • drm/i915: Preallocate next seqno before touching the ring · 9d773091
      Chris Wilson authored
      Based on the work by Mika Kuoppala, we realised that we need to handle
      seqno wraparound prior to committing our changes to the ring. The most
      obvious point then is to grab the seqno inside intel_ring_begin(), and
      then to reuse that seqno for all ring operations until the next request.
      As intel_ring_begin() can fail, the callers must already be prepared to
      handle such failure and so we can safely add further checks.
      
      This patch looks like it should be split up into the interface
      changes and the tweaks to move seqno wrapping from the execbuffer into
      the core seqno increment. However, I found no easy way to break it into
      incremental steps without introducing further broken behaviour.
      
      v2: Mika found a silly mistake and a subtle error in the existing code;
      inside i915_gem_retire_requests() we were resetting the sync_seqno of
      the target ring based on the seqno from this ring - which are only
      related by the order of their allocation, not retirement. Hence we were
      applying the optimisation that the rings were synchronised too early;
      fortunately the only real casualty there is the handling of seqno
      wrapping.
      
      v3: Do not forget to reset the sync_seqno upon module reinitialisation,
      ala resume.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=863861
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> [v2]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      9d773091
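      The shape of the scheme can be sketched in a few lines: reserve the next seqno up front, while failure is still allowed, and only consume it when the request is committed. This is a simplified sketch with invented names (the real state lives in the per-ring structure, and the real wraparound path also idles the GPU):

```c
#include <stdint.h>

/* Illustrative per-ring state. */
struct ring_state {
    uint32_t next_seqno;   /* 0 means "no seqno reserved yet" */
    uint32_t last_seqno;
};

/* Reserve the next seqno before any ring commands are emitted, so
 * wraparound is handled at a point where we may still fail cleanly.
 * Seqno 0 is reserved as "invalid" and skipped. */
static int ring_prepare_seqno(struct ring_state *ring)
{
    if (ring->next_seqno == 0) {
        uint32_t seqno = ring->last_seqno + 1;
        if (seqno == 0)   /* wrapped; the real driver also idles the GPU here */
            seqno = 1;
        ring->next_seqno = seqno;
    }
    return 0;
}

/* Consume the reserved seqno; every operation until the next request
 * reuses the same value. */
static uint32_t ring_get_seqno(struct ring_state *ring)
{
    ring->last_seqno = ring->next_seqno;
    ring->next_seqno = 0;   /* force re-reservation for the next request */
    return ring->last_seqno;
}
```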
  8. 22 Nov 2012, 1 commit
    • drm/i915: Remove bogus test for a present execbuffer · be7cb634
      Chris Wilson authored
      The intention of checking obj->gtt_offset!=0 is to verify that the
      target object was listed in the execbuffer and had been bound into the
      GTT. This is guaranteed by the earlier rearrangement to split the
      execbuffer operation into reserve and relocation phases and then
      verified by the check that the target handle had been processed during
      the reservation phase.
      
      However, the actual checking of obj->gtt_offset==0 is bogus as we can
      indeed reference an object at offset 0. For instance, the framebuffer
      installed by the BIOS often resides at offset 0 - causing EINVAL as we
      legitimately try to render using the stolen fb.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Eric Anholt <eric@anholt.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      be7cb634
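      The underlying lesson is that validity must come from the reservation-phase bookkeeping, not from the value of the offset, since offset 0 is a perfectly legal GTT address. A hedged sketch with illustrative names (the real driver tracks this differently):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative object state; field names are invented for this sketch. */
struct gem_object {
    uint64_t gtt_offset;
    bool in_execbuffer;   /* set while the object is reserved for this execbuf */
};

/* Offset 0 is a valid GTT address (e.g. the BIOS framebuffer often lives
 * there), so "was this target listed and bound?" has to be answered by an
 * explicit flag, never by gtt_offset != 0. */
static bool target_is_valid(const struct gem_object *obj)
{
    return obj->in_execbuffer;
}
```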
  9. 12 Nov 2012, 1 commit
    • drm/i915: Stop using AGP layer for GEN6+ · e76e9aeb
      Ben Widawsky authored
      As a quick hack we make the old intel_gtt structure mutable so we can
      fool a bunch of the existing code which depends on elements in that data
      structure. We can/should try to remove this in a subsequent patch.
      
      This should preserve the old gtt init behavior which upon writing these
      patches seems incorrect. The next patch will fix these things.
      
      The one exception is VLV which doesn't have the preserved flush control
      write behavior. Since we want to do that for all GEN6+ stuff, we'll
      handle that in a later patch. Mainstream VLV support doesn't actually
      exist yet anyway.
      
      v2: Update the comment to remove the "voodoo"
      Check that the last pte written matches what we readback
      
      v3: actually kill cache_level_to_agp_type since most of the flags will
      disappear in an upcoming patch
      
      v4: v3 was actually not what we wanted (Daniel)
      Make the ggtt bind assertions better and stricter (Chris)
      Fix some uncaught errors at gtt init (Chris)
      Some other random stuff that Chris wanted
      
      v5: check for i==0 in gen6_ggtt_bind_object to shut up gcc (Ben)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> [v4]
      [danvet: Make the cache_level -> agp_flags conversion for pre-gen6 a
      tad more robust by mapping everything != CACHE_NONE to the cached agp
      flag - we have a 1:1 uncached mapping, but different modes of
      cacheable (at least on later generations). Suggested by Chris Wilson.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e76e9aeb
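      The v2 sanity check ("check that the last pte written matches what we readback") is a cheap way to detect writes that never reached the GTT. A simplified sketch, assuming an invented PTE encoding (the real layout is generation-specific):

```c
#include <stdint.h>

/* Illustrative PTE encoding: page address in the high bits, flags low. */
static uint32_t pte_encode(uint64_t addr, uint32_t flags)
{
    return (uint32_t)(addr & ~0xfffull) | flags;
}

/* Write PTEs for a range of pages, then read back the last one written
 * and verify it matches - if it doesn't, the GTT writes were lost. */
static int bind_and_verify(volatile uint32_t *gtt, const uint64_t *pages,
                           int n, uint32_t flags)
{
    for (int i = 0; i < n; i++)
        gtt[i] = pte_encode(pages[i], flags);
    if (gtt[n - 1] != pte_encode(pages[n - 1], flags))
        return -1;   /* an -EIO-style failure in a real driver */
    return 0;
}
```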
  10. 18 Oct 2012, 1 commit
    • drm/i915: Allow DRM_ROOT_ONLY|DRM_MASTER to submit privileged batchbuffers · d7d4eedd
      Chris Wilson authored
      With the introduction of per-process GTT space, the hardware designers
      thought it wise to also limit the ability to write to MMIO space to only
      a "secure" batch buffer. The ability to rewrite registers is the only
      way to program the hardware to perform certain operations like scanline
      waits (required for tear-free windowed updates). So we either have a
      choice of adding an interface to perform those synchronized updates
      inside the kernel, or we permit certain processes the ability to write
      to the "safe" registers from within its command stream. This patch
      exposes the ability to submit a SECURE batch buffer to
      DRM_ROOT_ONLY|DRM_MASTER processes.
      
      v2: Haswell split up bit8 into a ppgtt bit (still bit8) and a security
      bit (bit 13, accidentally not set). Also add a comment explaining why
      secure batches need a global gtt binding.
      
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
      [danvet: added hsw fixup.]
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      d7d4eedd
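      The gatekeeping itself is a one-line permission test at execbuffer time. A hedged sketch, with an invented credentials struct standing in for DRM's master/root accounting on the file descriptor (the flag bit matches I915_EXEC_SECURE in the i915 uapi, but treat it as illustrative here):

```c
#include <stdbool.h>
#include <stdint.h>

#define EXEC_SECURE (1u << 9)   /* mirrors I915_EXEC_SECURE */

/* Illustrative caller credentials; the real check inspects the DRM
 * file's master status and root capability. */
struct drm_caller {
    bool is_master;
    bool is_root;
};

/* Only a DRM_ROOT_ONLY|DRM_MASTER caller may submit a SECURE batch,
 * since such a batch can write MMIO registers from its command stream. */
static int check_exec_flags(uint32_t flags, const struct drm_caller *c)
{
    if ((flags & EXEC_SECURE) && !(c->is_master && c->is_root))
        return -1;   /* -EPERM in the real driver */
    return 0;
}
```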
  11. 03 Oct 2012, 2 commits
  12. 20 Sep 2012, 3 commits
  13. 25 Aug 2012, 1 commit
    • drm/i915: Avoid unbinding due to an interrupted pin_and_fence during execbuffer · 7788a765
      Chris Wilson authored
      If we need to stall in order to complete the pin_and_fence operation
      during execbuffer reservation, there is a high likelihood that the
      operation will be interrupted by a signal (thanks X!). In order to
      simplify the cleanup along that error path, the object was
      unconditionally unbound and the error propagated. However, being
      interrupted here is far more common than I would like and so we can
      strive to avoid the extra work by eliminating the forced unbind.
      
      v2: In discussion over the indecent colour of the new functions and
      unwind path, we realised that we can use the new unreserve function to
      clean up the code even further.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      7788a765
  14. 24 Aug 2012, 2 commits
  15. 21 Aug 2012, 1 commit
    • drm/i915: Track unbound pages · 6c085a72
      Chris Wilson authored
      When dealing with a working set larger than the GATT, or even the
      mappable aperture when touching through the GTT, we end up with evicting
      objects only to rebind them at a new offset again later. Moving an
      object into and out of the GTT requires clflushing the pages, thus
      causing a double-clflush penalty for rebinding.
      
      To avoid having to clflush on rebinding, we can track the pages as they
      are evicted from the GTT and only relinquish those pages on memory
      pressure.
      
      As usual, if it were not for the handling of out-of-memory condition and
      having to manually shrink our own bo caches, it would be a net reduction
      of code. Alas.
      
      Note: The patch also contains a few changes to the last-hope
      evict_everything logic in i915_gem_execbuffer.c - we no longer try to
      only evict the purgeable stuff in a first try (since that's superfluous
      and only helps in OOM corner-cases, not fragmented-gtt thrashing
      situations).
      
      Also, the extraction of the get_pages retry loop from bind_to_gtt (and
      other callsites) to get_pages should imo have been a separate patch.
      
      v2: Ditch the newly added put_pages (for unbound objects only) in
      i915_gem_reset. A quick irc discussion hasn't revealed any important
      reason for this, so if we need this, I'd like to have a git blame'able
      explanation for it.
      
      v3: Undo the s/drm_malloc_ab/kmalloc/ in get_pages that Chris noticed.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: Split out code movements and rant a bit in the commit message
      with a few Notes. Done v2]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      6c085a72
  16. 06 Aug 2012, 1 commit
  17. 26 Jul 2012, 7 commits
  18. 25 Jul 2012, 1 commit
  19. 20 Jul 2012, 1 commit
    • drm/i915: Insert a flush between batches if the breadcrumb was dropped · 09cf7c9a
      Chris Wilson authored
      If we drop the breadcrumb request after a batch due to a signal for
      example we aim to fix it up at the next opportunity. In this case we
      emit a second batchbuffer with no waits upon the first and so no
      opportunity to insert the missing request, so we need to emit the
      missing flush for coherency. (Note that invalidating the render
      cache is the same as flushing it, so there should have been no
      observable corruption.)
      
      Note that beside simply adding the missing flush, avoiding potential
      render corruption, this will also fix at least parts of the problem
      introduced by some funny interaction of these two commits:
      
      commit de2b9985
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Wed Jul 4 22:52:50 2012 +0200
      
          drm/i915: don't return a spurious -EIO from intel_ring_begin
      
      which allowed intel_ring_begin to return -ERESTARTSYS and
      
      commit cc889e0f
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Wed Jun 13 20:45:19 2012 +0200
      
          drm/i915: disable flushing_list/gpu_write_list
      
      which essentially disabled the flushing list.
      
      The issue happens when we submit a batch & emit it, but get
      interrupted (thanks to the first patch) while trying to emit the
      flush. On the next batch we still assume that the full gpu domain
      handling is in effect and hence compute the invalidate&flushing
      domains. But thanks to the 2nd patch we totally ignore these and only
      invalidate all gpu domains, presuming that any required flushes have
      been issued already.  Which is wrong and eventually results in us
      updating the new write_domain values with the computed
      pending_write_domain values, which leaves an object with write_domain
      == 0 on the gpu_write_list.
      
      As soon as we try to unbind that object, things blow up.
      
      Fix this by emitting the missing flush according to the new
      ring->gpu_caches_dirty flag.
      
      Note that this does _not_ fix all the current cases where we end up
      with an object on the flushing_list that can't be flushed.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=52040
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: Add bug explanation to commit message.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      09cf7c9a
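      The mechanism is "flush debt" tracking: remember when a batch's post-flush was dropped and pay it off before the next batch's invalidation. A minimal sketch with invented names, keeping only the gpu_caches_dirty flag from the commit:

```c
#include <stdbool.h>

/* Illustrative ring state; flushes_emitted stands in for actually
 * emitting a flush command into the ring. */
struct ring {
    bool gpu_caches_dirty;
    int flushes_emitted;
};

/* If the previous batch was interrupted (e.g. by a signal) before its
 * breadcrumb flush could be emitted, remember the debt. */
static void finish_batch(struct ring *ring, bool flush_succeeded)
{
    ring->gpu_caches_dirty = !flush_succeeded;
}

/* Before invalidating caches for the next batch, emit any flush that
 * the previous batch still owes. */
static void emit_invalidate(struct ring *ring)
{
    if (ring->gpu_caches_dirty) {
        ring->flushes_emitted++;
        ring->gpu_caches_dirty = false;
    }
    /* ... then emit the invalidation for the new batch ... */
}
```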
  20. 20 Jun 2012, 1 commit
    • drm/i915: disable flushing_list/gpu_write_list · cc889e0f
      Daniel Vetter authored
      This is just the minimal patch to disable all this code so that we can
      do decent amounts of QA before we rip it all out.
      
      The complicating thing is that we need to flush the gpu caches after
      the batchbuffer is emitted. Which is past the point of no return where
      execbuffer can't fail any more (otherwise we risk submitting the same
      batch multiple times).
      
      Hence we need to add a flag to track whether any caches associated
      with that ring are dirty. And emit the flush in add_request if that's
      the case.
      
      Note that this has quite a few behaviour changes:
      - Caches get flushed/invalidated unconditionally.
      - Invalidation now happens after potential inter-ring sync.
      
      I've bantered around a bit with Chris on irc whether this fixes
      anything, and it might or might not. The only thing clear is that with
      these changes it's much easier to reason about correctness.
      
      Also rip out a lone get_next_request_seqno in the execbuffer
      retire_commands function. I've dug around and I couldn't figure out
      why that is still there, with the outstanding lazy request stuff it
      shouldn't be necessary.
      
      v2: Chris Wilson complained that I also invalidate the read caches
      when flushing after a batchbuffer. Now optimized.
      
      v3: Added some comments to explain the new flushing behaviour.
      
      Cc: Eric Anholt <eric@anholt.net>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      cc889e0f
  21. 14 Jun 2012, 1 commit
    • drm/i915/context: switch contexts with execbuf2 · 6e0a69db
      Ben Widawsky authored
      Use the rsvd1 field in execbuf2 to specify the context ID associated
      with the workload. This will allow the driver to do the proper context
      switch when/if needed.
      
      v2: Add checks for context switches on rings not supporting contexts.
      Before the code would silently ignore such requests.
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      6e0a69db
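      Decoding the context ID is just a matter of reading the low 32 bits of the formerly reserved field, which is what the uapi's i915_execbuffer2_get_context_id macro does. A simplified sketch (the struct below is illustrative, not the full drm_i915_gem_execbuffer2):

```c
#include <stdint.h>

/* Illustrative execbuf2 arguments; only the reused field is shown. */
struct execbuffer2_args {
    uint64_t rsvd1;
};

/* The low 32 bits of rsvd1 carry the context ID for this workload. */
static uint32_t exec_context_id(const struct execbuffer2_args *args)
{
    return (uint32_t)(args->rsvd1 & 0xffffffffu);
}
```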
  22. 20 May 2012, 1 commit
  23. 08 May 2012, 1 commit
    • drm/i915: Limit calling mark-busy only for potential scanouts · acb87dfb
      Chris Wilson authored
      The principle of intel_mark_busy() is that we want to spot the
      transition of when the display engine is being used in order to bump
      powersaving modes and increase display clocks. As such it is only
      important when the display is changing, i.e. when rendering to the
      scanout or other sprite/plane, and these are characterised by being
      pinned.
      
      v2: Mark the whole device as busy on execbuffer and pageflips as well
      and rebase against dinq for the minor bug fix to be immediately
      applicable.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: fix compile fail.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      acb87dfb
  24. 03 May 2012, 1 commit