1. 08 Aug 2013, 2 commits
    • drm/i915: Use new bind/unbind in eviction code · f6cd1f15
      Committed by Ben Widawsky
      Eviction code, like the rest of the converted code needs to be aware of
      the address space for which it is evicting (or the everything case, all
      addresses). With the updated bind/unbind interfaces of the last patch,
      we can now safely move the eviction code over.
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      f6cd1f15
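
      To illustrate the shape of the change described above, here is a minimal,
      self-contained C sketch of address-space-aware eviction. All names and
      the list layout are hypothetical stand-ins, not the driver's actual
      code; the point is only that the VM to scan becomes an explicit
      parameter.

          #include <stdbool.h>
          #include <stddef.h>

          struct vma_model {
              struct vma_model *next;
              bool pinned;
          };

          struct address_space_model {
              struct vma_model *inactive;   /* this VM's inactive VMAs */
          };

          /* Evict from one specific address space instead of a single global
           * GTT: the VM to scan is passed in by the caller. */
          static struct vma_model *
          pick_eviction_candidate(struct address_space_model *vm)
          {
              struct vma_model *v;

              for (v = vm->inactive; v; v = v->next)
                  if (!v->pinned)
                      return v;     /* first unpinned VMA in this VM */
              return NULL;          /* nothing evictable in this VM  */
          }
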
    • drm/i915: plumb VM into bind/unbind code · 07fe0b12
      Committed by Ben Widawsky
      As alluded to in several patches, and it will be reiterated later... A
      VMA is an abstraction for a GEM BO bound into an address space.
      Therefore it stands to reason, that the existing bind, and unbind are
      the ones which will be the most impacted. This patch implements this,
      and updates all callers which weren't already updated in the series
      (because it was too messy).
      
      This patch represents the bulk of an earlier, larger patch. I've pulled
      out a bunch of things at the request of Daniel. The history is preserved
      for posterity with the email convention of ">". One big change from the
      original patch, aside from a bunch of cropping, is that I've created an
      i915_vma_unbind() function. That is because we always have the VMA
      anyway, and doing an extra lookup is useful. There is a caveat: we
      retain an i915_gem_object_ggtt_unbind, for the global cases which might
      not talk in VMAs.
      
      > drm/i915: plumb VM into object operations
      >
      > This patch was formerly known as:
      > "drm/i915: Create VMAs (part 3) - plumbing"
      >
      > This patch adds a VM argument, bind/unbind, and the object
      > offset/size/color getters/setters. It preserves the old ggtt helper
      > functions because things still need, and will continue to need them.
      >
      > Some code will still need to be ported over after this.
      >
      > v2: Fix purge to pick an object and unbind all vmas
      > This was doable because of the global bound list change.
      >
      > v3: With the commit to actually pin/unpin pages in place, there is no
      > longer a need to check if unbind succeeded before calling put_pages().
      > Make put_pages only BUG() after checking pin count.
      >
      > v4: Rebased on top of the new hangcheck work by Mika
      > plumbed eb_destroy also
      > Many checkpatch related fixes
      >
      > v5: Very large rebase
      >
      > v6:
      > Change BUG_ON to WARN_ON (Daniel)
      > Rename vm to ggtt in preallocate stolen, since it is always ggtt when
      > dealing with stolen memory. (Daniel)
      > list_for_each will short-circuit already (Daniel)
      > remove superfluous space (Daniel)
      > Use per object list of vmas (Daniel)
      > Make obj_bound_any() use obj_bound for each vm (Ben)
      > s/bind_to_gtt/bind_to_vm/ (Ben)
      >
      > Fixed up the inactive shrinker. As Daniel noticed the code could
      > potentially count the same object multiple times. While it's not
      > possible in the current case, since 1 object can only ever be bound into
      > 1 address space thus far - we may as well try to get something more
      > future proof in place now. With a prep patch before this to switch over
      > to using the bound list + inactive check, we're now able to carry that
      > forward for every address space an object is bound into.
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      [danvet: Rebase on top of the loss of "drm/i915: Cleanup more of VMA
      in destroy".]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      07fe0b12
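
      The commit above centres on an i915_vma_unbind() style interface plus a
      ggtt convenience wrapper. The following is a hedged, self-contained C
      sketch of that split; the struct layout, the lookup helper and the
      function bodies are illustrative assumptions, not the kernel
      implementation.

          #include <stddef.h>

          struct vm_space;                    /* opaque address space */
          struct gem_obj;                     /* opaque GEM object    */

          struct vma_sketch {
              struct gem_obj  *obj;
              struct vm_space *vm;
              int              bound;
          };

          /* Unbind one (object, address space) pairing directly via its VMA. */
          static int vma_unbind_sketch(struct vma_sketch *vma)
          {
              if (!vma->bound)
                  return 0;
              /* ...tear down the PTEs for exactly this VMA... */
              vma->bound = 0;
              return 0;
          }

          /* Hypothetical lookup so the global-GTT path can keep an
           * object-based calling convention, as the message describes. */
          static struct vma_sketch *lookup_ggtt_vma(struct gem_obj *obj)
          {
              (void)obj;
              return NULL;      /* placeholder: no real VMA tracking here */
          }

          static int obj_ggtt_unbind_sketch(struct gem_obj *obj)
          {
              struct vma_sketch *vma = lookup_ggtt_vma(obj);

              return vma ? vma_unbind_sketch(vma) : 0;
          }
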
  2. 06 Aug 2013, 1 commit
    • drm/i915: Make proper functions for VMs · a70a3148
      Committed by Ben Widawsky
      Earlier in the conversion sequence we attempted to quickly wedge in the
      transitional interface as static inlines.
      
      Now that we're sure these interfaces are sane, for easier debugging and
      to decrease code size (since many of these functions may be called quite
      a bit), make them real functions.
      
      While at it, kill off the set_color interface. We'll always have the
      VMA, or easily get to it.
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a70a3148
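
      As a rough picture of the inline-to-function move described above, the
      sketch below shows the pattern in plain C. The struct and helper names
      are invented for illustration; only the idea (prototype in the header,
      one out-of-line definition) comes from the commit message.

          /* Minimal illustrative type, not the driver's. */
          struct vma_stub { unsigned long node_start; };

          /* Before: a static inline in a header, duplicated into every
           * compilation unit that includes it. */
          static inline unsigned long vma_offset_inline(const struct vma_stub *v)
          {
              return v->node_start;
          }

          /* After: the header keeps only a prototype... */
          unsigned long vma_offset(const struct vma_stub *v);

          /* ...and a single definition lives in a .c file, which shrinks code
           * size when the helper is called from many sites. */
          unsigned long vma_offset(const struct vma_stub *v)
          {
              return v->node_start;
          }
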
  3. 18 Jul 2013, 3 commits
    • drm/i915: Create VMAs · 2f633156
      Committed by Ben Widawsky
      Formerly: "drm/i915: Create VMAs (part 1)"
      
      In a previous patch, the notion of a VM was introduced. A VMA describes
      a part of the VM address space. A VMA is similar to the concept
      in the linux mm. However, instead of representing regular memory, a VMA
      is backed by a GEM BO. There may be many VMAs for a given object, one
      for each VM the object is to be used in. This may occur through flink,
      dma-buf, or a number of other transient states.
      
      Currently the code depends on only 1 VMA per object, for the global GTT
      (and aliasing PPGTT). The following patches will address this and make
      the rest of the infrastructure more suited to multiple VMAs per object.
      
      v2: s/i915_obj/i915_gem_obj (Chris)
      
      v3: Only move an object to the now global unbound list if there are no
      more VMAs for the object which are bound into a VM (ie. the list is
      empty).
      
      v4: killed obj->gtt_space
      some reworks due to rebase
      
      v5: Free vma on error path (Imre)
      
      v6: Another missed vma free in i915_gem_object_bind_to_gtt error path
      (Imre)
      Fixed vma freeing in stolen preallocation (Imre)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      [danvet: Squash in fixup from Ben to not deref a non-existing vma in
      set_cache_level, reported by Chris.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      2f633156
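
      The relationships described above (one object, potentially many VMAs,
      one per address space the object is bound into) can be pictured with the
      rough model below. The field names are approximations chosen for
      illustration and are not the exact kernel definitions.

          struct node_stub { unsigned long start, size; };
          struct list_stub { struct list_stub *next, *prev; };

          struct gem_object_stub {
              struct list_stub vma_list;      /* all VMAs backed by this object */
          };

          struct address_space_stub {
              struct list_stub inactive_list; /* per-VM eviction bookkeeping    */
          };

          /* One binding of one GEM object into one address space. An object
           * shared via flink or dma-buf may accumulate several of these. */
          struct vma_binding_stub {
              struct node_stub           node;  /* placement within that VM     */
              struct gem_object_stub    *obj;   /* the backing object           */
              struct address_space_stub *vm;    /* the VM it is bound into      */
              struct list_stub           obj_link;
          };
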
    • drm/i915: Move active/inactive lists to new mm · 5cef07e1
      Committed by Ben Widawsky
      Shamelessly manipulated out of Daniel :-)
      "When moving the lists around explain that the active/inactive stuff is
      used by eviction when we run out of address space, so needs to be
      per-vma and per-address space. Bound/unbound otoh is used by the
      shrinker which only cares about the amount of memory used and not one
      bit about in which address space this memory is all used in. Of course
      to actually kick out an object we need to unbind it from every address
      space, but for that we have the per-object list of vmas."
      
      v2: Leave the bound list as a global one. (Chris, indirectly)
      
      v3: Rebased with no i915_gtt_vm. In most places I added a new *vm local,
      since it will eventually be replaced by a vm argument.
      Put comment back inline, since it no longer makes sense to do otherwise.
      
      v4: Rebased on hangcheck/error state movement
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5cef07e1
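
      The quoted rationale boils down to a simple split, sketched below in
      plain C: eviction bookkeeping hangs off each address space, while the
      shrinker's bound/unbound accounting stays global. The names are
      illustrative only.

          struct list_node { struct list_node *next, *prev; };

          struct vm_stub {
              /* Used by eviction: what occupies *this* address space. */
              struct list_node active_list;
              struct list_node inactive_list;
          };

          struct driver_stub {
              /* Used by the shrinker: how much memory the GPU holds at all,
               * regardless of which address space(s) it is mapped into. */
              struct list_node bound_list;
              struct list_node unbound_list;
          };
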
    • drm/i915: Put the mm in the parent address space · 93bd8649
      Committed by Ben Widawsky
      Every address space should support object allocation. It therefore makes
      sense to have the allocator be part of the "superclass" which GGTT and
      PPGTT will derive.
      
      Since our maximum address space size is only 2GB we're not yet able to
      avoid doing allocation/eviction; but we'd hope one day this becomes
      almost irrelevant.
      
      v2: Rebased
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      93bd8649
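
      A hedged sketch of the "superclass" idea from the message above: the
      range allocator sits in a common base that both the global GTT and
      per-process GTTs embed. The member names below (other than the notion of
      an embedded allocator) are invented purely for illustration.

          struct range_alloc_stub { unsigned long hole_start, hole_end; };

          struct address_space_base {
              struct range_alloc_stub mm;   /* every VM can hand out addresses */
              unsigned long start;
              unsigned long total;
          };

          struct ggtt_stub {
              struct address_space_base base;  /* "derives" from the base       */
              void *gsm;                       /* global-GTT-only example field  */
          };

          struct ppgtt_stub {
              struct address_space_base base;
              unsigned int num_pd_entries;     /* per-process-only example field */
          };
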
  4. 09 Jul 2013, 1 commit
    • drm/i915: Embed drm_mm_node in i915 gem obj · c6cfb325
      Committed by Ben Widawsky
      Embedding the node in the obj is more natural in the transition to VMAs
      which will also have embedded nodes. This change also helps transition
      away from put_block to remove node.
      
      Though it's quite an uncommon occurrence, it's somewhat convenient to not
      fail at bind time because we cannot allocate the node. Though in
      practice there are other allocations (like the request structure) which
      would probably make this point not terribly useful.
      
      Quoting Daniel:
      Note that the only difference between put_block and remove_node is
      that the former fills up the preallocation cache. Which we don't need
      anyway and hence is just wasted space.
      
      v2: Clean up the stolen preallocation code.
      Rebased on the reserve_node patches
      renames ggtt_ stuff to gtt_ stuff
      WARN_ON if the object is already bound (which doesn't mean it's in the
      bound list, tricky)
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      c6cfb325
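
      The before/after of embedding the node can be pictured as below; the
      struct names are illustrative, not the driver's. Embedding removes the
      separate node allocation that bind could previously fail on.

          struct mm_node_stub { unsigned long start, size; };

          struct obj_before_stub {
              struct mm_node_stub *gtt_space;   /* allocated at bind time,
                                                   so binding could fail here  */
          };

          struct obj_after_stub {
              struct mm_node_stub gtt_space;    /* storage is always present;
                                                   "unbound" is simply an
                                                   empty/unallocated node      */
          };
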
  5. 18 Jan 2013, 1 commit
    • drm/i915: Create a gtt structure · 5d4545ae
      Committed by Ben Widawsky
      The purpose of the gtt structure is to help isolate our gtt specific
      properties from the rest of the code (in doing so it helps us finish the
      isolation from the AGP connection).
      
      The following members are pulled out (and renamed):
      gtt_start
      gtt_total
      gtt_mappable_end
      gtt_mappable
      gtt_base_addr
      gsm
      
      The gtt structure will serve as a nice place to put gen specific gtt
      routines in upcoming patches. As far as what else I feel belongs in this
      structure: it is meant to encapsulate the GTT's physical properties.
      This is why I've not added fields which track various drm_mm properties,
      or things like gtt_mtrr (which is itself a pretty transient field).
      Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      [Ben modified commit messages]
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      5d4545ae
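
      As a rough picture of the grouping, the sketch below collects the listed
      members into one structure. The member names come from the commit
      message; the types are guesses made purely for illustration.

          #include <stddef.h>
          #include <stdint.h>

          struct gtt_stub {
              unsigned long start;          /* formerly gtt_start         */
              size_t        total;          /* formerly gtt_total         */
              unsigned long mappable_end;   /* formerly gtt_mappable_end  */
              void         *mappable;       /* formerly gtt_mappable      */
              uint64_t      base_addr;      /* formerly gtt_base_addr     */
              void         *gsm;            /* formerly gsm               */
          };
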
  6. 03 Oct 2012, 2 commits
  7. 24 Aug 2012, 1 commit
  8. 21 Aug 2012, 1 commit
    • drm/i915: Track unbound pages · 6c085a72
      Committed by Chris Wilson
      When dealing with a working set larger than the GATT, or even the
      mappable aperture when touching through the GTT, we end up with evicting
      objects only to rebind them at a new offset again later. Moving an
      object into and out of the GTT requires clflushing the pages, thus
      causing a double-clflush penalty for rebinding.
      
      To avoid having to clflush on rebinding, we can track the pages as they
      are evicted from the GTT and only relinquish those pages on memory
      pressure.
      
      As usual, if it were not for the handling of out-of-memory condition and
      having to manually shrink our own bo caches, it would be a net reduction
      of code. Alas.
      
      Note: The patch also contains a few changes to the last-hope
      evict_everything logic in i915_gem_execbuffer.c - we no longer try to
      only evict the purgeable stuff in a first try (since that's superfluous
      and only helps in OOM corner-cases, not fragmented-gtt thrashing
      situations).
      
      Also, the extraction of the get_pages retry loop from bind_to_gtt (and
      other callsites) to get_pages should imo have been a separate patch.
      
      v2: Ditch the newly added put_pages (for unbound objects only) in
      i915_gem_reset. A quick irc discussion hasn't revealed any important
      reason for this, so if we need this, I'd like to have a git blame'able
      explanation for it.
      
      v3: Undo the s/drm_malloc_ab/kmalloc/ in get_pages that Chris noticed.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [danvet: Split out code movements and rant a bit in the commit message
      with a few Notes. Done v2]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      6c085a72
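
      The core idea above (keep the pages when unbinding, release them only
      under memory pressure, and thereby skip the clflush on rebind) is
      sketched below in self-contained C. All names and the boolean model are
      illustrative assumptions, not the kernel code.

          #include <stdbool.h>

          struct obj_pages_stub {
              bool bound;       /* currently mapped into the GTT         */
              bool has_pages;   /* backing pages allocated and clflushed */
          };

          static void unbind_keep_pages(struct obj_pages_stub *o)
          {
              o->bound = false;
              /* Deliberately keep o->has_pages: a later rebind reuses the
               * already-flushed pages instead of paying the clflush twice. */
          }

          static void shrink_under_pressure(struct obj_pages_stub *o)
          {
              if (!o->bound && o->has_pages)
                  o->has_pages = false;  /* relinquish only when memory is tight */
          }
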
  9. 26 Jul 2012, 2 commits
  10. 16 Jul 2012, 1 commit
    • drm: Add colouring to the range allocator · 6b9d89b4
      Committed by Chris Wilson
      In order to support snoopable memory on non-LLC architectures (so that
      we can bind vgem objects into the i915 GATT for example), we have to
      avoid the prefetcher on the GPU from crossing memory domains and so
      prevent allocation of a snoopable PTE immediately following an uncached
      PTE. To do that, we need to extend the range allocator with support for
      tracking and segregating different node colours.
      
      This will be used by i915 to segregate memory domains within the GTT.
      
      v2: Now with more drm_mm helpers and less driver interference.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
      6b9d89b4
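
      The colouring idea can be modelled as below: the allocator consults a
      caller-supplied hook that shrinks the usable hole whenever a neighbour
      of a different colour would otherwise sit immediately adjacent. This is
      a standalone model; the real drm_mm callback differs in its exact
      signature.

          struct coloured_node_stub { unsigned long start, size, color; };

          /* Keep a one-page guard after a neighbour of a different colour so
           * the GPU prefetcher never crosses memory domains. */
          static void color_adjust_stub(const struct coloured_node_stub *prev,
                                        unsigned long color,
                                        unsigned long *hole_start,
                                        unsigned long *hole_end)
          {
              (void)hole_end;
              if (prev && prev->color != color)
                  *hole_start += 4096;
          }
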
  11. 20 May 2012, 1 commit
    • drm/i915: Introduce for_each_ring() macro · b4519513
      Committed by Chris Wilson
      In many places we wish to iterate over the rings associated with the
      GPU, so refactor them to use a common macro.
      
      Along the way, there are a few code removals that should be side-effect
      free and some rearrangement which should only have a cosmetic impact,
      such as error-state.
      
      Note that this slightly changes the semantics in the hangcheck code:
      We now always cycle through all enabled rings instead of
      short-circuiting the logic.
      
      v2: Pull in a couple of suggestions from Ben and Daniel for
      intel_ring_initialized() and not removing the warning (just moving them
      to a new home, closer to the error).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
      [danvet: Added note to commit message about the small behaviour
      change, suggested by Ben Widawsky.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b4519513
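
      The macro introduced above is, in spirit, a "loop over all rings but
      skip the uninitialised ones" helper. Below is a hedged standalone
      imitation; the field names, ring count and enable test are made up, and
      the construction differs from the kernel's.

          #define NUM_RINGS_STUB 3

          struct ring_stub { int enabled; };
          struct dev_stub  { struct ring_stub ring[NUM_RINGS_STUB]; };

          /* Visit every enabled ring; the empty if-branch keeps the macro
           * safe to use as a single statement. */
          #define for_each_ring_stub(r, dev, i)                        \
              for ((i) = 0; (i) < NUM_RINGS_STUB; (i)++)               \
                  if (((r) = &(dev)->ring[(i)]), !(r)->enabled) {} else

          /* Usage:
           *     struct ring_stub *ring;
           *     int i;
           *     for_each_ring_stub(ring, &dev, i)
           *         reset_ring(ring);
           */
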
  12. 03 May 2012, 3 commits
  13. 28 Feb 2012, 2 commits
  14. 26 Jan 2012, 1 commit
  15. 20 Sep 2011, 1 commit
  16. 07 Feb 2011, 1 commit
  17. 12 Jan 2011, 1 commit
  18. 26 Nov 2010, 1 commit
  19. 24 Nov 2010, 1 commit
  20. 31 Oct 2010, 1 commit
  21. 29 Oct 2010, 1 commit
  22. 28 Oct 2010, 1 commit
  23. 22 Oct 2010, 1 commit
  24. 20 Oct 2010, 2 commits
  25. 01 Oct 2010, 1 commit
  26. 29 Sep 2010, 1 commit
  27. 21 Sep 2010, 1 commit
    • drm/i915: Hold a reference to the object whilst unbinding the eviction list · af626103
      Committed by Chris Wilson
      During heavy aperture thrashing we may be forced to wait upon several active
      objects during eviction. The active list may be the last reference to
      these objects and so the action of waiting upon one of them may cause
      another to be freed (and itself unbound). To prevent the object
      disappearing underneath us, we need to acquire and hold a reference
      whilst unbinding.
      
      This should fix the reported page refcount OOPS:
      
      kernel BUG at drivers/gpu/drm/i915/i915_gem.c:1444!
      ...
      RIP: 0010:[<ffffffffa0093026>]  [<ffffffffa0093026>] i915_gem_object_put_pages+0x25/0xf5 [i915]
      Call Trace:
       [<ffffffffa009481d>] i915_gem_object_unbind+0xc5/0x1a7 [i915]
       [<ffffffffa0098ab2>] i915_gem_evict_something+0x3bd/0x409 [i915]
       [<ffffffffa0027923>] ? drm_gem_object_lookup+0x27/0x57 [drm]
       [<ffffffffa0093bc3>] i915_gem_object_bind_to_gtt+0x1d3/0x279 [i915]
       [<ffffffffa0095b30>] i915_gem_object_pin+0xa3/0x146 [i915]
       [<ffffffffa0027948>] ? drm_gem_object_lookup+0x4c/0x57 [drm]
       [<ffffffffa00961bc>] i915_gem_do_execbuffer+0x50d/0xe32 [i915]
      Reported-by: Shawn Starr <shawn.starr@rogers.com>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=18902
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      af626103
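
      The fix above follows a classic pattern: take a reference before any
      wait that could indirectly drop the last reference, and release it only
      after unbinding. The sketch below is a simplified standalone model; the
      refcount helpers and the wait/unbind stubs are placeholders, not the DRM
      interfaces.

          struct evict_obj_stub {
              int refcount;
          };

          static void obj_get_stub(struct evict_obj_stub *o) { o->refcount++; }
          static void obj_put_stub(struct evict_obj_stub *o) { o->refcount--; }

          static int wait_rendering_stub(struct evict_obj_stub *o) { (void)o; return 0; }
          static int unbind_stub(struct evict_obj_stub *o)         { (void)o; return 0; }

          static int evict_one_stub(struct evict_obj_stub *o)
          {
              int ret;

              obj_get_stub(o);              /* keep the object alive across the wait */
              ret = wait_rendering_stub(o); /* may retire and free other objects     */
              if (ret == 0)
                  ret = unbind_stub(o);
              obj_put_stub(o);              /* safe to drop once unbind has finished */
              return ret;
          }
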
  28. 08 Sep 2010, 1 commit
    • drm/i915: Kill the active list spinlock · de227ef0
      Committed by Chris Wilson
      This spinlock only served debugging purposes in a time when we could not
      be sure of the mutex ever being released upon a GPU hang. As we now
      should be able rely on hangcheck to do the job for us (and that error
      reporting should not itself require the struct mutex) we can kill the
      incomplete attempt at protection.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      de227ef0
  29. 10 Aug 2010, 2 commits