1. 27 Jun, 2017 2 commits
  2. 23 Jun, 2017 9 commits
  3. 22 Jun, 2017 2 commits
  4. 21 Jun, 2017 18 commits
  5. 20 Jun, 2017 4 commits
  6. 19 Jun, 2017 1 commit
  7. 17 Jun, 2017 2 commits
  8. 16 Jun, 2017 2 commits
    • drm/i915: Stash a pointer to the obj's resv in the vma · 95ff7c7d
      Committed by Chris Wilson
      During execbuf, a mandatory step is that we add this request (this
      fence) to each object's reservation_object. Inside execbuf we track
      the vma, so adding the fence to the reservation_object means first
      chasing the obj pointer, incurring another cache miss. We can reduce
      the number of cache misses by stashing a pointer to the
      reservation_object in the vma itself. (A simplified sketch of this
      idea follows the log entries below.)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170616140525.6394-1-chris@chris-wilson.co.uk
      95ff7c7d
    • drm/i915: Async GPU relocation processing · 7dd4f672
      Committed by Chris Wilson
      If the user requires patching of their batch or auxiliary buffers, we
      currently make the alterations on the CPU. If they are active on the
      GPU at the time, we wait under the struct_mutex for them to finish
      executing before we rewrite the contents. This happens when shared
      relocation trees are used between different contexts with separate
      address spaces (the buffers then have different addresses in each),
      so the 3D state needs to be adjusted between execution on each
      context. However, we don't need to use the CPU to do the relocation
      patching: we can queue commands to the GPU to perform it and use
      fences to serialise the operation with current and future activity,
      so the operation on the GPU appears just as atomic as performing it
      immediately. Performing the relocation rewrites on the GPU is not
      free: in terms of pure throughput the number of relocations/s is
      about halved, but more importantly so is the time spent under the
      struct_mutex. (A simplified sketch of the queued-write idea follows
      the log entries below.)
      
      v2: Break out the request/batch allocation for clearer error flow.
      v3: A few asserts to ensure rq ordering is maintained
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      7dd4f672
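
The resv-stashing change above (95ff7c7d) is essentially a pointer-chasing
optimisation: cache obj->resv in the vma when the vma is set up, so the
execbuf fence-add hot path never has to dereference vma->obj. The following
is a minimal, self-contained C sketch of that idea only; the simplified
struct layouts and the helpers vma_init() and add_fence() are hypothetical
stand-ins, not the actual drm/i915 definitions.

    /*
     * User-space sketch: keep a cached pointer to the object's
     * reservation_object inside the vma so the fence-add path does not
     * chase vma->obj. Types here are simplified stand-ins.
     */
    #include <stdio.h>

    struct reservation_object { int fence_count; };
    struct drm_gem_object     { struct reservation_object resv; };

    struct vma {
    	struct drm_gem_object     *obj;
    	struct reservation_object *resv;  /* cached obj->resv, set at init */
    };

    static void vma_init(struct vma *vma, struct drm_gem_object *obj)
    {
    	vma->obj  = obj;
    	vma->resv = &obj->resv;   /* stash once, reuse on every execbuf */
    }

    /* Hot path: add the request's fence without touching vma->obj. */
    static void add_fence(struct vma *vma)
    {
    	vma->resv->fence_count++;
    }

    int main(void)
    {
    	struct drm_gem_object obj = { .resv = { 0 } };
    	struct vma vma;

    	vma_init(&vma, &obj);
    	add_fence(&vma);
    	printf("fences on obj: %d\n", obj.resv.fence_count);
    	return 0;
    }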
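
The async-relocation change (7dd4f672) replaces CPU patching under
struct_mutex with writes queued as GPU work and ordered by fences. Below is
a rough user-space model of that queued-write idea only; the types and
functions (reloc_cmd, gpu_queue, queue_reloc, execute_queue) are invented
for illustration and do not correspond to the kernel's real API or to how
i915 actually emits the relocation commands.

    /*
     * User-space model: instead of waiting for a buffer to go idle and
     * patching it on the CPU, record a small "write this value here"
     * command and let it run later, after prior work has finished.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_CMDS 16

    struct reloc_cmd {
    	uint64_t *target;   /* dword inside the batch/aux buffer to patch */
    	uint64_t  value;    /* new address to write */
    };

    struct gpu_queue {
    	struct reloc_cmd cmds[MAX_CMDS];
    	int n;
    };

    /* Queue a write instead of patching on the CPU under the lock. */
    static int queue_reloc(struct gpu_queue *q, uint64_t *target, uint64_t value)
    {
    	if (q->n >= MAX_CMDS)
    		return -1;
    	q->cmds[q->n++] = (struct reloc_cmd){ .target = target, .value = value };
    	return 0;
    }

    /* Later, the queued writes execute in submission order, standing in
     * for GPU commands that run once earlier fences have signalled. */
    static void execute_queue(struct gpu_queue *q)
    {
    	for (int i = 0; i < q->n; i++)
    		*q->cmds[i].target = q->cmds[i].value;
    	q->n = 0;
    }

    int main(void)
    {
    	uint64_t batch[4] = { 0 };
    	struct gpu_queue q = { .n = 0 };

    	/* Relocation: slot 2 of the batch must point at address 0x1000. */
    	queue_reloc(&q, &batch[2], 0x1000);

    	/* ... submission continues without waiting for the batch to idle ... */

    	execute_queue(&q);
    	printf("batch[2] = 0x%llx\n", (unsigned long long)batch[2]);
    	return 0;
    }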