1. 20 Jan 2013, 1 commit
    • drm/i915: Remove use of gtt_mappable_entries · 93d18799
      Committed by Ben Widawsky
      Mappable_end, i.e. the size, is almost always what you want, as opposed
      to the number of entries. Since we already have that information, we can
      scrap the number of entries and only calculate it when needed.
      
      If gtt_start is !0, this will have slightly different behavior. This
      difference can only occur in DRI1, and exists when we try to kick out
      the firmware fb. The new code seems like a bugfix to me.
      
      The other case where we've changed the behavior is during init: we check
      the mappable region against our currently known lower and upper limits
      (64MB and 512MB). This now matches the comment, and makes things more
      convenient after removing gtt_mappable_entries.
      
      Also worth noting is that the setting of mappable_end is taken out of
      setup, because we now do it earlier in the DRI2 case and therefore need
      to add that tiny hunk to support the DRI1 ioctl.
      
      v2: Move up mappable end to before legacy AGP init
      
      v3: Add the dev_priv inclusion here from previous rebase error in patch
      5
      
      Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com> (v2)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      [danvet: squash in fix for a printk format flag mismatch warning.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
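      A minimal sketch of the two points above; the field names
      (gtt_mappable_end, the mm struct placement) are assumptions rather than
      copied from the patch. The entry count is derived from the mappable size
      only where it is needed, and init checks the size directly against the
      known limits.

          /* Derive the entry count on demand instead of caching it. */
          unsigned int mappable_entries =
                  dev_priv->mm.gtt_mappable_end >> PAGE_SHIFT;

          /* Sanity-check the mappable size against the 64MB/512MB limits. */
          if (dev_priv->mm.gtt_mappable_end < 64 * 1024 * 1024 ||
              dev_priv->mm.gtt_mappable_end > 512 * 1024 * 1024) {
                  DRM_ERROR("Unknown mappable GTT size: 0x%08lx\n",
                            (unsigned long)dev_priv->mm.gtt_mappable_end);
                  return -ENXIO;
          }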
  2. 18 Jan 2013, 6 commits
  3. 20 Dec 2012, 3 commits
    • drm/i915: Move even more gtt code to i915_gem_gtt · d7e5008f
      Committed by Ben Widawsky
      This really should have been part of the kill agp series.
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: disable shrinker lock stealing for create_mmap_offset · da494d7c
      Committed by Daniel Vetter
      The mmap offset structure is not part of the drm/i915 code, but is
      provided by the gem helpers. To avoid leaky abstractions (by either
      depending upon implementation details of said helper wrt preallocations,
      or reimplementing it in our code and so fuzzing around in internal
      details of that helper), simply disable the shrinker lock stealing
      across calls into the helper functions.
      
      This should fix igt/gem_tiled_swapping.
      
      v2: Fix cleanup path confusion bemoaned by Chris Wilson.
      Reported-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
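      A hedged sketch of the approach: wrap the call into the GEM mmap-offset
      helper so the shrinker cannot steal struct_mutex while the helper
      allocates. The shrinker_no_lock_stealing flag name is an assumption
      taken from the companion patch below, and the real function's
      retry/purge handling is omitted.

          static int
          i915_gem_object_create_mmap_offset(struct drm_i915_gem_object *obj)
          {
                  struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
                  int ret;

                  /* Keep the shrinker from borrowing struct_mutex while the
                   * common helper allocates its mmap offset structures. */
                  dev_priv->mm.shrinker_no_lock_stealing = true;
                  ret = drm_gem_create_mmap_offset(&obj->base);
                  dev_priv->mm.shrinker_no_lock_stealing = false;

                  return ret;
          }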
    • drm/i915: optionally disable shrinker lock stealing · 677feac2
      Committed by Daniel Vetter
      commit 5774506f
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Wed Nov 21 13:04:04 2012 +0000
      
          drm/i915: Borrow our struct_mutex for the direct reclaim
      
      added a nice trick to steal the struct_mutex lock in the shrinker if
      it's the current task holding it. But this also caused the requirement
      that every place which allocates memory needs to be careful about the
      gem state of objects, since the shrinker could have pulled the rug out
      from under it. We've usually solved this by carefully preallocating
      things or ensuring that buffers are pinned already.
      
      But the shrinker also reaps mmap offsets, so allocating those needs to
      be careful, too. That code has now been factored out into common
      helpers, so we're left with either fragile code that depends upon the
      common helper not doing something we don't want it to do, or
      reimplementing the mmap offset creation ourselves and thereby leaking
      implementation details into our code.
      
      Since this all results in a leaky abstraction, cop out by disabling the
      lock-borrowing trick while calling down into the helpers. That way our
      craziness is nicely confined to files in drm/i915.
      
      v2: Split out the change to create_mmap_offset as request by Chris Wilson.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
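      A sketch of the shrinker-side check, assuming a shrinker_no_lock_stealing
      flag in dev_priv->mm and i915's local mutex_is_locked_by() helper; only
      the locking decision is shown, not the full shrink callback.

          if (!mutex_trylock(&dev->struct_mutex)) {
                  /* Fall back to borrowing struct_mutex from the current
                   * task, unless lock stealing has been disabled. */
                  if (!mutex_is_locked_by(&dev->struct_mutex, current))
                          return 0;
                  if (dev_priv->mm.shrinker_no_lock_stealing)
                          return 0;
                  unlock = false;
          } else {
                  unlock = true;
          }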
  4. 19 Dec 2012, 6 commits
  5. 17 Dec 2012, 1 commit
  6. 11 Dec 2012, 4 commits
  7. 07 Dec 2012, 1 commit
  8. 06 Dec 2012, 1 commit
  9. 04 Dec 2012, 3 commits
  10. 01 Dec 2012, 2 commits
  11. 29 Nov 2012, 7 commits
    • drm/i915: optimize the shmem_pwrite slowpath handling · 8dcf015e
      Committed by Daniel Vetter
      Since we drop dev->struct_mutex when going through the slowpath, the
      object might have been moved out of the cpu domain. Hence we need to
      clflush the entire object to ensure that after the ioctl returns,
      everything is coherent again (interwoven writes are ill-defined
      anyway).
      
      But we only need to do this if we start in the cpu domain and the
      object requires flushing for coherency. So don't do the flushing if
      the object is coherent anyway or if we've done in-line clflushing
      already.
      
      v2: i915_gem_clflush_object already checks whether the object is
      coherent and if so, drops the flushing. Hence we don't need to check
      that ourselves, simplifying the condition.
      
      v3: Reorder the checks for better clarity (and adjust the comment
      accordingly), suggested by Chris Wilson.
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
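      A sketch of the resulting fixup at the end of the pwrite path, assuming
      the hit_slowpath and needs_clflush_after locals used by the i915 shmem
      pwrite code of that era:

          if (hit_slowpath) {
                  /* If we weren't going to clflush inline anyway and the
                   * object moved out of the CPU write domain while the lock
                   * was dropped, flush the whole object; the clflush helper
                   * itself skips objects that are already coherent. */
                  if (!needs_clflush_after &&
                      obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
                          i915_gem_clflush_object(obj);
                          i915_gem_chipset_flush(dev);
                  }
          }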
    • drm/i915: simplify shmem pwrite/pread slowpath handling · a39a6805
      Committed by Daniel Vetter
      The shmem paths for pwrite/pread used a clever trick to hold onto a
      single page when dropping the big dev->struct_mutex for the slowpath.
      But this ran the risk of reinstating (or not completely purging) the
      backing storage when dropping purgeable objects.
      
      Hence the code needed to keep track of whether it ever dropped the
      lock, and if it did, manually check whether it needs to re-purge the
      backing storage. But thanks to the pages pin count introduced in
      
      commit a5570178
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Sep 4 21:02:54 2012 +0100
      
          drm/i915: Pin backing pages whilst exporting through a dmabuf vmap
      
      which allowed us to pin the backing storage and remove that page
      reference trick from shmem_pwrite/read in
      
      commit f60d7f0c
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Sep 4 21:02:56 2012 +0100
      
          drm/i915: Pin backing pages for pread
      
      and
      
      commit 755d2218
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Sep 4 21:02:55 2012 +0100
      
          drm/i915: Pin backing pages for pwrite
      
      we can now abolish this check. The slowpath cleanup completely
      disappears from pread, and for pwrite we're only left with the domain
      fixup in case someone moved the object out of the cpu domain from
      under us. A follow-on patch will optimize that a notch more.
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
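      A minimal sketch of what the slowpath reduces to once the backing pages
      carry a pin count (helper names as used by the driver at the time; error
      handling trimmed): pin once, drop the lock for the copy, unpin
      afterwards, with no re-purge bookkeeping needed.

          ret = i915_gem_object_get_pages(obj);
          if (ret)
                  goto out;

          i915_gem_object_pin_pages(obj);
          /* ... drop struct_mutex and copy to/from the shmem pages ... */
          i915_gem_object_unpin_pages(obj);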
    • drm/i915: Set sync_seqno properly after seqno wrap · 7b01e260
      Committed by Mika Kuoppala
      i915_gem_handle_seqno_wrap() zeroes all sync_seqnos, but as the wrap can
      happen inside ring->sync_to(), a pre-wrap seqno was carried over and
      overwrote the zeroed sync_seqno.
      
      When the wrap is handled, all outstanding requests are retired and
      objects are moved to the inactive queue, causing their last_read_seqno
      to be zero. Use this to update the sync_seqno correctly.
      
      RING_SYNC registers after the wrap will contain pre-wrap values, which
      are >= seqno, so the semaphore wait injected into the ring completes
      immediately.
      
      Original idea for using last_read_seqno from Chris Wilson.
      Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
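      A sketch of the fix in the object-sync path, per the description above:
      after ->sync_to() returns (which may itself have triggered the wrap and
      retired everything), record the object's last_read_seqno instead of the
      seqno captured before the call.

          ret = to->sync_to(to, from, seqno);
          if (!ret)
                  /* last_read_seqno is zeroed by retirement if a wrap
                   * happened under us inside sync_to(). */
                  from->sync_seqno[idx] = obj->last_read_seqno;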
    • drm/i915: Rearrange code to only have a single method for waiting upon the ring · 3e960501
      Committed by Chris Wilson
      Replace the wait for the ring to be clear with the more common wait for
      the ring to be idle. The principal advantage is one less exported
      intel_ring_wait function, and the removal of a hardcoded value.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: Simplify flushing activity on the ring · b662a066
      Committed by Chris Wilson
      As we now always preallocate the seqno before writing to the ring, we
      can trivially test whether we have any pending activity on the ring by
      inspecting the olr (outstanding lazy request). This then makes it
      possible to flush operations that are not normally associated with a
      request, like power-management.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
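      A sketch of the resulting flush step in the ring-idle path (function
      names as exported by the driver at the time, details trimmed): a
      non-zero olr means work was emitted without a request, so add one
      before waiting.

          if (ring->outstanding_lazy_request) {
                  ret = i915_add_request(ring, NULL, NULL);
                  if (ret)
                          return ret;
          }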
    • drm/i915: Preallocate next seqno before touching the ring · 9d773091
      Committed by Chris Wilson
      Based on the work by Mika Kuoppala, we realised that we need to handle
      seqno wraparound prior to committing our changes to the ring. The most
      obvious point then is to grab the seqno inside intel_ring_begin(), and
      then to reuse that seqno for all ring operations until the next request.
      As intel_ring_begin() can fail, the callers must already be prepared to
      handle such failure and so we can safely add further checks.
      
      This patch looks like it should be split up into the interface
      changes and the tweaks to move seqno wrapping from the execbuffer into
      the core seqno increment. However, I found no easy way to break it into
      incremental steps without introducing further broken behaviour.
      
      v2: Mika found a silly mistake and a subtle error in the existing code:
      inside i915_gem_retire_requests() we were resetting the sync_seqno of
      the target ring based on the seqno from this ring, which are only
      related by the order of their allocation, not retirement. Hence we were
      applying the optimisation that the rings were synchronised too early;
      fortunately, the only real casualty there is the handling of seqno
      wrapping.
      
      v3: Do not forget to reset the sync_seqno upon module reinitialisation,
      ala resume.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=863861
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> [v2]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
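      A sketch of the preallocation step, with helper names assumed from the
      description above; the wraparound handling itself would live inside the
      seqno allocator.

          static int
          intel_ring_alloc_seqno(struct intel_ring_buffer *ring)
          {
                  /* Reuse the current olr until a request consumes it. */
                  if (ring->outstanding_lazy_request)
                          return 0;

                  return i915_gem_get_seqno(ring->dev,
                                            &ring->outstanding_lazy_request);
          }

      intel_ring_begin() would call this before emitting any dwords, so a
      wraparound can still idle the GPU safely before the ring is touched.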
    • drm/i915: Wait upon the last request seqno, rather than a future seqno · b5d17794
      Committed by Chris Wilson
      In commit 69c2fc89
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Fri Jul 20 12:41:03 2012 +0100
      
          drm/i915: Remove the per-ring write list
      
      the explicit flush was removed from i915_ring_idle(). However, we
      continued to wait upon the next seqno which now did not correspond to
      any request (except for the unusual condition of a failure to queue a
      request after execbuffer) and so would wait indefinitely.
      
      This has an important side-effect that i915_gpu_idle() does not cause
      the seqno to be incremented. This is vital if we are to be able to idle
      the GPU to handle seqno wraparound, as in subsequent patches.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
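      A sketch of the change to the ring-idle helper, assuming the request
      list layout of that era: wait on the seqno of the last queued request,
      or return immediately if there is none.

          static int i915_ring_idle(struct intel_ring_buffer *ring)
          {
                  if (list_empty(&ring->request_list))
                          return 0;

                  /* Wait upon the last request, not a future seqno. */
                  return i915_wait_seqno(ring,
                                         list_entry(ring->request_list.prev,
                                                    struct drm_i915_gem_request,
                                                    list)->seqno);
          }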
  12. 22 Nov 2012, 4 commits
  13. 12 Nov 2012, 1 commit