1. 19 Dec 2015 (2 commits)
    • drm/i915: Limit the busy wait on requests to 5us not 10ms! · ca5b721e
      Committed by Chris Wilson
      When waiting for high frequency requests, the finite amount of time
      required to set up the irq and wait upon it limits the response rate. By
      busywaiting on the request completion for a short while we can service
      the high frequency waits as quickly as possible. However, if it is a slow
      request, we want to sleep as quickly as possible. The tradeoff between
      waiting and sleeping is roughly the time it takes to sleep on a request,
      on the order of a microsecond. Based on measurements of synchronous
      workloads from across big core and little atom, I have set the limit for
      busywaiting as 10 microseconds. In most of the synchronous cases, we can
      reduce the limit down to as little as 2 microseconds, but that leaves
      quite a few test cases regressing by factors of 3 and more.
      
      The code currently uses the jiffie clock, but that is far too coarse (on
      the order of 10 milliseconds) and results in poor interactivity as the
      CPU ends up being hogged by slow requests. To get microsecond resolution
      we need to use a high resolution timer. The cheapest of which is polling
      local_clock(), but that is only valid on the same CPU. If we switch CPUs
      because the task was preempted, we can also use that as an indicator that
       the system is too busy to waste cycles on spinning and we should sleep
      instead.
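
      A minimal sketch of the timing helpers this approach relies on, close to
      what the patch itself adds (names and the ns-to-us shift mirror the i915
      code of this era and should be read as illustrative):

          /* Approximate microseconds, valid only while we stay on one CPU. */
          static unsigned long local_clock_us(unsigned int *cpu)
          {
                  unsigned long t;

                  *cpu = get_cpu();               /* keep clock and CPU id coherent */
                  t = local_clock() >> 10;        /* ns -> ~us, cheaply */
                  put_cpu();

                  return t;
          }

          static bool busywait_stop(unsigned long timeout_us, unsigned int cpu)
          {
                  unsigned int this_cpu;

                  if (time_after(local_clock_us(&this_cpu), timeout_us))
                          return true;            /* spun past the ~5us budget */

                  /* A CPU switch means we were preempted: too busy, sleep. */
                  return this_cpu != cpu;
          }

      The caller samples local_clock_us() once before spinning and passes that
      value plus 5 as the budget, bounding the spin to roughly five
      microseconds on the same CPU.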
      
      __i915_spin_request was introduced in
      commit 2def4ad9 [v4.2]
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Apr 7 16:20:41 2015 +0100
      
           drm/i915: Optimistically spin for the request completion
      
      v2: Drop full u64 for unsigned long - the timer is 32bit wraparound safe,
      so we can use native register sizes on smaller architectures. Mention
      the approximate microseconds units for elapsed time and add some extra
      comments describing the reason for busywaiting.
      
      v3: Raise the limit to 10us
      v4: Now 5us.
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Link: https://lkml.org/lkml/2015/11/12/621
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Eero Tamminen <eero.t.tamminen@intel.com>
      Cc: "Rantala, Valtteri" <valtteri.rantala@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1449833608-22125-3-git-send-email-chris@chris-wilson.co.uk
      ca5b721e
    • drm/i915: Break busywaiting for requests on pending signals · 91b0c352
      Committed by Chris Wilson
      The busywait in __i915_spin_request() does not respect pending signals
      and so may consume the entire timeslice for the task instead of
      returning to userspace to handle the signal.
      
      In the worst case this could cause a delay in signal processing of 20ms,
      which would be a noticeable jitter in cursor tracking. If a higher
      resolution signal was being used, for example to provide fair allocation
      of a server's timeslices between clients, we could expect to detect some
      unfairness between clients (i.e. some windows not updating as fast as
      others). This issue was noticed when inspecting a report of poor
      interactivity resulting from excessively high __i915_spin_request usage.
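
      A sketch of the spin loop with the signal check this patch adds (helper
      names follow the i915 request code of this period; the early -EBUSY
      check for an already armed irq is omitted, so treat it as illustrative):

          static int __i915_spin_request(struct drm_i915_gem_request *req,
                                         int state)
          {
                  unsigned long timeout = jiffies + 1;

                  while (!need_resched()) {
                          if (i915_gem_request_completed(req, true))
                                  return 0;

                          /* Bail out so a pending signal is handled promptly
                           * instead of burning the rest of the timeslice here.
                           */
                          if (signal_pending_state(state, current))
                                  break;

                          if (time_after_eq(jiffies, timeout))
                                  break;

                          cpu_relax_lowlatency();
                  }

                  if (i915_gem_request_completed(req, false))
                          return 0;

                  return -EAGAIN;
          }

      Here "state" is the task state the caller would otherwise sleep in
      (e.g. TASK_INTERRUPTIBLE), so signals only break the spin when the wait
      itself is interruptible.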
      
      Fixes regression from
      commit 2def4ad9 [v4.2]
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Tue Apr 7 16:20:41 2015 +0100
      
           drm/i915: Optimistically spin for the request completion
      
      v2: Try to assess the impact of the bug
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: "Rogozhkin, Dmitry V" <dmitry.v.rogozhkin@intel.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Eero Tamminen <eero.t.tamminen@intel.com>
      Cc: "Rantala, Valtteri" <valtteri.rantala@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1449833608-22125-2-git-send-email-chris@chris-wilson.co.uk
      91b0c352
  2. 17 Dec 2015 (1 commit)
  3. 12 Dec 2015 (3 commits)
  4. 10 Dec 2015 (2 commits)
  5. 09 Dec 2015 (1 commit)
    • drm/i915: Add soft-pinning API for execbuffer · 506a8e87
      Committed by Chris Wilson
      Userspace can pass in an offset that it presumes the object is located
      at. The kernel will then do its utmost to fit the object into that
      location. The assumption is that userspace is handling its own object
      locations (for example along with full-ppgtt) and that the kernel will
      rarely have to make space for the user's requests.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      
      v2: Fixed incorrect eviction found by Michal Winiarski - fix suggested by Chris
      Wilson.  Fixed incorrect error paths causing crash found by Michal Winiarski.
      (Not published externally)
      
      v3: Rebased because of trivial conflict in object_bind_to_vm.  Fixed eviction
      to allow eviction of soft-pinned objects when another soft-pinned object used
      by a subsequent execbuffer overlaps, as reported by Michal Winiarski.
      (Not published externally)
      
      v4: Moved soft-pinned objects to the front of ordered_vmas so that they are
      pinned first after an address conflict happens to avoid repeated conflicts in
      rare cases (Suggested by Chris Wilson).  Expanded comment on
      drm_i915_gem_exec_object2.offset to cover this new API.
      
      v5: Added I915_PARAM_HAS_EXEC_SOFTPIN parameter for detecting this capability
      (Kristian). Added check for multiple pinnings on eviction (Akash). Made sure
      buffers are not considered misplaced without the user specifying
      EXEC_OBJECT_SUPPORTS_48B_ADDRESS.  User must assume responsibility for any
      addressing workarounds.  Updated object2.offset field comment again to clarify
      NO_RELOC case (Chris).  checkpatch cleanup.
      
      v6: Trivial rebase on latest drm-intel-nightly
      
      v7: Catch attempts to pin above the max virtual address size and return
      EINVAL (Tvrtko). Decouple EXEC_OBJECT_SUPPORTS_48B_ADDRESS and
      EXEC_OBJECT_PINNED flags, user must pass both flags in any attempt to pin
      something at an offset above 4GB (Chris, Daniel Vetter).
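
      A hedged userspace sketch of how an object might be soft-pinned with
      this API (ioctl plumbing omitted; the handle and address are
      placeholders, and availability would first be probed via the
      I915_PARAM_HAS_EXEC_SOFTPIN getparam added in v5):

          #include <stdint.h>
          #include <string.h>
          #include <drm/i915_drm.h>

          /* Ask the kernel to place "handle" exactly at "gpu_addr" (page
           * aligned). Anything pinned at or above 4GiB must also be flagged
           * as supporting 48-bit addresses, per v7. */
          static void setup_pinned_object(struct drm_i915_gem_exec_object2 *obj,
                                          uint32_t handle, uint64_t gpu_addr)
          {
                  memset(obj, 0, sizeof(*obj));
                  obj->handle = handle;
                  obj->offset = gpu_addr;
                  obj->flags  = EXEC_OBJECT_PINNED;
                  if (gpu_addr >= (1ull << 32))
                          obj->flags |= EXEC_OBJECT_SUPPORTS_48B_ADDRESS;
          }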
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Akash Goel <akash.goel@intel.com>
      Cc: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
      Cc: Michal Winiarski <michal.winiarski@intel.com>
      Cc: Zou Nanhai <nanhai.zou@intel.com>
      Cc: Kristian Høgsberg <hoegsberg@gmail.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Reviewed-by: Michel Thierry <michel.thierry@intel.com>
      Acked-by: PDT
      Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1449575707-20933-1-git-send-email-thomas.daniel@intel.com
      506a8e87
  6. 05 Dec 2015 (1 commit)
  7. 04 Dec 2015 (1 commit)
    • drm/i915: Fix RPS pointer passed from wait_ioctl to i915_wait_request · b6aa0873
      Committed by Chris Wilson
      In commit 2e1b8730 [v4.2]
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Mon Apr 27 13:41:22 2015 +0100
      
          drm/i915: Convert RPS tracking to a intel_rps_client struct
      
      we converted the __i915_wait_request() to take a new intel_rps_client
      struct (rather than having to pass fake drm_i915_file_private structs).
      However, due to use of passing a void pointer, I didn't spot one
      callsite in wait-ioctl was passing the wrong pointer.
      
      Fwiw, the impact of this bug is zero. Along the rps path, we always
      first call list_empty(rps) which when we pass in the wrong pointer
      always evaluates to false and we return early and never chase the
      invalid pointers.
      
      The user visible impact is then wait-ioctl doesn't get the same
      waitboosting as the other interfaces (set-domain, throttle), which is a
      performance concern for the *very* few users of the wait interface.
      There is also a libdrm_intel patch to use the wait-ioctl for
      drm_intel_bo_wait_rendering() if anyone feels inclined to review
      libdrm_intel patches.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      [danvet: Add Chris' explanation for why the impact of this is pretty
      close to 0.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      b6aa0873
  8. 03 Dec 2015 (1 commit)
    • drm/i915: Extend LRC pinning to cover GPU context writeback · 6d65ba94
      Committed by Nick Hoath
      Use the first retired request on a new context to unpin
      the old context. This ensures that the hw context remains
      bound until it has been written back to by the GPU.
      Now that the context is pinned until later in the request/context
      lifecycle, it no longer needs to be pinned from context_queue to
      retire_requests.
      This fixes an issue with GuC submission where the GPU might not
      have finished writing back the context before it is unpinned. This
      results in a GPU hang.
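
      A rough sketch of the idea (the field and helper names below are
      hypothetical simplifications, not the exact i915 symbols):

          /* Hypothetical: when a request retires, the previously active
           * context can only be unpinned once a request from a *different*
           * context has retired, i.e. once the GPU has switched away and
           * written the old context image back. lr_context_unpin() stands
           * in for the driver's real unpin path. */
          static void engine_retire_request(struct intel_engine_cs *engine,
                                            struct drm_i915_gem_request *req)
          {
                  if (engine->last_retired_ctx &&
                      engine->last_retired_ctx != req->ctx)
                          lr_context_unpin(engine->last_retired_ctx, engine);

                  engine->last_retired_ctx = req->ctx;
          }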
      
      v2: Moved the new pin to cover GuC submission (Alex Dai)
          Moved the new unpin to request_retire to fix coverage leak
      v3: Added switch to default context if freeing a still pinned
          context just in case the hw was actually still using it
      v4: Unwrapped context unpin to allow calling without a request
      v5: Only create a switch to idle context if the ring doesn't
          already have a request pending on it (Alex Dai)
          Rename unsaved to dirty to avoid double negatives (Dave Gordon)
          Changed _no_req postfix to __ prefix for consistency (Dave Gordon)
          Split out per engine cleanup from context_free as it
          was getting unwieldy
          Corrected locking (Dave Gordon)
      v6: Removed some bikeshedding (Mika Kuoppala)
          Added explanation of the GuC hang that this fixes (Daniel Vetter)
      v7: Removed extra per request pinning from ring reset code (Alex Dai)
          Added forced ring unpin/clean in error case in context free (Alex Dai)
      Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
      Issue: VIZ-4277
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: David Gordon <david.s.gordon@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Alex Dai <yu.dai@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Alex Dai <yu.dai@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      6d65ba94
  9. 01 Dec 2015 (1 commit)
  10. 18 Nov 2015 (2 commits)
  11. 16 Nov 2015 (1 commit)
  12. 12 Nov 2015 (1 commit)
  13. 07 Nov 2015 (3 commits)
    • mm, fs: introduce mapping_gfp_constraint() · c62d2555
      Committed by Michal Hocko
      There are many places which use mapping_gfp_mask to restrict a more
      generic gfp mask which would be used for allocations which are not
      directly related to the page cache but they are performed in the same
      context.
      
      Let's introduce a helper function which makes the restriction explicit and
      easier to track.  This patch doesn't introduce any functional changes.
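
      The helper itself is essentially a one-line mask (as found in
      include/linux/pagemap.h of this era):

          /* Restrict a caller-supplied gfp mask by the mapping's own
           * allocation constraints, making the usual
           * "gfp & mapping_gfp_mask(mapping)" pattern explicit. */
          static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
                                                     gfp_t gfp_mask)
          {
                  return mapping_gfp_mask(mapping) & gfp_mask;
          }

      A typical caller then writes mapping_gfp_constraint(mapping, GFP_KERNEL)
      instead of open-coding the AND.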
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Suggested-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c62d2555
    • mm, page_alloc: rename __GFP_WAIT to __GFP_RECLAIM · 71baba4b
      Committed by Mel Gorman
      __GFP_WAIT was used to signal that the caller was in atomic context and
      could not sleep.  Now it is possible to distinguish between true atomic
      context and callers that are not willing to sleep.  The latter should
      clear __GFP_DIRECT_RECLAIM so kswapd will still wake.  As clearing
      __GFP_WAIT behaves differently, there is a risk that people will clear the
      wrong flags.  This patch renames __GFP_WAIT to __GFP_RECLAIM to clearly
      indicate what it does -- setting it allows all reclaim activity, clearing
      them prevents it.
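
      After the rename the flag is a composite of the two reclaim bits,
      roughly (from the gfp.h of this period):

          /* Setting __GFP_RECLAIM allows all reclaim activity: direct
           * reclaim plus waking kswapd. A caller that must not sleep clears
           * only the direct-reclaim half. */
          #define __GFP_RECLAIM \
                  ((__force gfp_t)(___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM))

          /* Example: "do not sleep here, but kswapd may still be woken". */
          gfp_t nowait_gfp = GFP_KERNEL & ~__GFP_DIRECT_RECLAIM;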
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71baba4b
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Committed by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and was depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
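
      The blocking check mentioned above reduces to testing the
      direct-reclaim bit, approximately:

          /* True if the caller may sleep for reclaim; preferred over testing
           * __GFP_WAIT directly, which can now give false positives. */
          static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
          {
                  return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
          }
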
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0164adc
  14. 02 Nov 2015 (1 commit)
  15. 31 Oct 2015 (1 commit)
  16. 29 Oct 2015 (1 commit)
    • drm/i915: Recover all available ringbuffer space following reset · 608c1a52
      Committed by Chris Wilson
      Having flushed all requests from all queues, we know that all
      ringbuffers must now be empty. However, since we do not reclaim
      all space when retiring the request (to prevent HEADs colliding
      with rapid ringbuffer wraparound) the amount of available space
      on each ringbuffer upon reset is less than when we start. Do one
      more pass over all the ringbuffers to reset the available space.
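
      A sketch of that extra pass (the ringbuffer fields and space-update
      helper follow the i915 code of this period and are illustrative):

          /* Every ringbuffer is now logically empty: mark everything up to
           * the current tail as retired and recompute the free space. */
          static void reset_ring_space(struct intel_engine_cs *ring)
          {
                  struct intel_ringbuffer *ringbuf;

                  list_for_each_entry(ringbuf, &ring->buffers, link) {
                          ringbuf->last_retired_head = ringbuf->tail;
                          intel_ring_update_space(ringbuf);
                  }
          }
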
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Cc: Dave Gordon <david.s.gordon@intel.com>
      608c1a52
  17. 23 Oct 2015 (1 commit)
  18. 21 Oct 2015 (2 commits)
  19. 13 Oct 2015 (1 commit)
  20. 07 Oct 2015 (1 commit)
  21. 06 Oct 2015 (1 commit)
    • drm/i915: Clean up associated VMAs on context destruction · e9f24d5f
      Committed by Tvrtko Ursulin
      Prevent leaking VMAs and PPGTT VMs when objects are imported
      via flink.
      
      Scenario is that any VMAs created by the importer will be left
      dangling after the importer exits, or destroys the PPGTT context
      with which they are associated.
      
      This is caused by object destruction not running when the
      importer closes the buffer object handle due the reference held
      by the exporter. This also leaks the VM since the VMA has a
      reference on it.
      
      In practice these leaks can be observed by stopping and starting
      the X server on a kernel with fbcon compiled in. Every time
      X server exits another VMA will be leaked against the fbcon's
      frame buffer object.
      
      Also on systems where flink buffer sharing is used extensively,
      like Android, this leak has even more serious consequences.
      
      This version takes a general approach from the earlier work
      by Rafael Barbalho (drm/i915: Clean-up PPGTT on context
      destruction) and tries to incorporate the subsequent discussion
      between Chris Wilson and Daniel Vetter.
      
      v2:
      
      Removed immediate cleanup on object retire - it was causing a
      recursive VMA unbind via i915_gem_object_wait_rendering. And
      it is in fact not even needed since by definition context
      cleanup worker runs only after the last context reference has
      been dropped, hence all VMAs against the VM belonging to the
      context are already on the inactive list.
      
      v3:
      
      Previous version could deadlock since VMA unbind waits on any
      rendering on an object to complete. Objects can be busy in a
      different VM which would mean that the cleanup loop would do
      the wait with the struct mutex held.
      
      This is an even simpler approach where we just unbind VMAs
      without waiting since we know all VMAs belonging to this VM
      are idle, and there is nothing in flight, at the point
      context destructor runs.
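
      A sketch of the resulting no-wait unbind pass run from the context
      destructor (close to the shape of this patch; symbol names are from
      the i915 code of the time and should be read as illustrative):

          /* Every VMA in this PPGTT is already idle, so unbind without
           * waiting - waiting could touch objects still busy in another VM
           * while we hold struct_mutex. */
          static void i915_gem_context_clean(struct intel_context *ctx)
          {
                  struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
                  struct i915_vma *vma, *next;

                  if (!ppgtt)
                          return;

                  list_for_each_entry_safe(vma, next,
                                           &ppgtt->base.inactive_list, mm_list) {
                          if (WARN_ON(__i915_vma_unbind_no_wait(vma)))
                                  break;
                  }
          }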
      
      v4:
      
      Double underscore prefix for __i915_vma_unbind_no_wait and a
      commit message typo fix. (Michel Thierry)
      
      Note that this is just a partial/interim fix since we have a bit a
      fundamental issue with cleaning up, e.g.
      
      https://bugs.freedesktop.org/show_bug.cgi?id=87729
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Testcase: igt/gem_ppgtt.c/flink-and-exit-vma-leak
      Reviewed-by: Michel Thierry <michel.thierry@intel.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Rafael Barbalho <rafael.barbalho@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      [danvet: Add a note that this isn't everything.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e9f24d5f
  22. 02 Oct 2015 (1 commit)
    • drm/i915: Wa32bitGeneralStateOffset & Wa32bitInstructionBaseOffset · 101b506a
      Committed by Michel Thierry
      There are some allocations that must be only referenced by 32-bit
      offsets. To limit the chances of having the first 4GB already full,
      objects not requiring this workaround use DRM_MM_SEARCH_BELOW/
      DRM_MM_CREATE_TOP flags.
      
      Specifically, any resource used with flat/heapless (0x00000000-0xfffff000)
      General State Heap (GSH) or Instruction State Heap (ISH) must be in a
      32-bit range, because the General State Offset and Instruction State
      Offset are limited to 32-bits.
      
      Objects must have EXEC_OBJECT_SUPPORTS_48B_ADDRESS flag to indicate if
      they can be allocated above the 32-bit address range. To limit the
      chances of having the first 4GB already full, objects will use
      DRM_MM_SEARCH_BELOW + DRM_MM_CREATE_TOP flags when possible.
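
      In the execbuffer reservation path this boils down to a pair of pin
      flags, roughly (a sketch; PIN_HIGH and PIN_ZONE_4G are the flags named
      in v4/v6 below):

          /* Objects that cannot take a >32-bit General/Instruction State
           * offset are constrained to the first 4GiB; everything else is
           * pinned high to keep that zone as free as possible. */
          static u64 exec_pin_flags(const struct drm_i915_gem_exec_object2 *entry)
          {
                  u64 flags = PIN_HIGH;

                  if (!(entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS))
                          flags |= PIN_ZONE_4G;

                  return flags;
          }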
      
      The libdrm user of the EXEC_OBJECT_SUPPORTS_48B_ADDRESS flag is here:
      http://lists.freedesktop.org/archives/intel-gfx/2015-September/075836.html
      
      v2: Changed flag logic from needs_32b to supports_48b.
      v3: Moved 48-bit support flag back to exec_object. (Chris, Daniel)
      v4: Split pin flags into PIN_ZONE_4G and PIN_HIGH; update PIN_OFFSET_MASK
      to use last PIN_ defined instead of hard-coded value; use correct limit
      check in eb_vma_misplaced. (Chris)
      v5: Don't touch PIN_OFFSET_MASK and update workaround comment (Chris)
      v6: Apply pin-high for ggtt too (Chris)
      v7: Handle simultaneous pin-high and pin-mappable end correctly (Akash)
          Fix check for entries currently using +4GB addresses, use min_t and
          other polish in object_bind_to_vm (Chris)
      v8: Commit message updated to point to libdrm patch.
      v9: vmas are allocated in the correct zone, so only check the flag when the
          vma has not been allocated. (Chris)
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v4)
      Signed-off-by: Michel Thierry <michel.thierry@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      101b506a
  23. 23 Sep 2015 (1 commit)
    • drm/i915/gtt: Do not initialize drm_mm twice. · a2cad9df
      Committed by Michał Winiarski
      It would be initialized just moments later by i915_init_vm.
      
      Rearrange the code such that i915_init_vm() is next to its callers
      inside i915_gem_gtt (and so we can make it static). After removing the
      dance around the files, it is clear that we are repeating some work
      inside the initializers (such as calling drm_mm_init() multiple times),
      so take advantage of the refactor to also remove some redundant code and
      clean up the interface.
      
      v2: Commit msg update,
          s/i915_init_vm/i915_address_space_init, move to i915_gem_gtt.c,
          init address_space during i915_gem_setup_global_gtt for ggtt.
      v3: Do not init global_link - we are adding it to vm_list moments later,
          make i915_address_space_init static, use OOP style parameter order.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      a2cad9df
  24. 17 Sep 2015 (1 commit)
    • drm/i915: fix kernel-doc warnings in i915_gem.c · d9072a3e
      Committed by Geliang Tang
      Fix the following 'make htmldocs' warnings:
      
        .//drivers/gpu/drm/i915/i915_gem.c:1729: warning: No description found for parameter 'vma'
        .//drivers/gpu/drm/i915/i915_gem.c:1729: warning: No description found for parameter 'vmf'
      
        .//drivers/gpu/drm/i915/i915_gem.c:4962: warning: No description found for parameter 'old'
        .//drivers/gpu/drm/i915/i915_gem.c:4962: warning: No description found for parameter 'new'
        .//drivers/gpu/drm/i915/i915_gem.c:4962: warning: No description found for parameter 'frontbuffer_bits'
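
      Fixing these just means giving each parameter a kernel-doc line; for
      the fault handler the comment becomes something like (illustrative
      wording, not the exact text of the patch):

          /**
           * i915_gem_fault - handle a page fault into a GTT mmap of an object
           * @vma: the VMA of the faulting mapping
           * @vmf: fault information supplied by the mm core
           *
           * Binds the object into the GTT if needed and inserts the PTE for
           * the faulting page so the userspace access can proceed.
           */
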
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      d9072a3e
  25. 14 Sep 2015 (2 commits)
    • drm/i915: Split alloc from init for lrc · e84fe803
      Committed by Nick Hoath
      Extend init/init_hw split to context init.
         - Move context initialisation in to i915_gem_init_hw
         - Move one off initialisation for render ring to
              i915_gem_validate_context
         - Move default context initialisation to logical_ring_init
      
      Rename intel_lr_context_deferred_create to
      intel_lr_context_deferred_alloc, to reflect reduced functionality &
      alloc/init split.
      
      This patch is intended to split out the allocation of resources &
      initialisation to allow easier reuse of code for resume/gpu reset.
      
      v2: Removed function ptr wrapping of do_switch_context (Daniel Vetter)
          Left ->init_context in intel_lr_context_deferred_alloc
          (Daniel Vetter)
          Remove unnecessary init flag & ring type test. (Daniel Vetter)
          Improve commit message (Daniel Vetter)
      v3: On init/reinit, set the hw next sequence number to the sw next
          sequence number. This is set to 1 at driver load time. This prevents
          the seqno being reset on reinit (Chris Wilson)
      v4: Set seqno back to ~0 - 0x1000 at start-of-day, and increment by 0x100
          on reset.
          This makes it obvious which bbs are which after a reset. (David Gordon
          & John Harrison)
          Rebase.
      v5: Rebase. Fixed rebase breakage. Put context pinning in separate
          function. Removed code churn. (Thomas Daniel)
      v6: Cleanup up issues introduced in v2 & v5 (Thomas Daniel)
      
      Issue: VIZ-4798
      Signed-off-by: Nick Hoath <nicholas.hoath@intel.com>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: John Harrison <john.c.harrison@intel.com>
      Cc: David Gordon <david.s.gordon@intel.com>
      Cc: Thomas Daniel <thomas.daniel@intel.com>
      Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e84fe803
    • drm/i915: don't try to load GuC fw on pre-gen9 · 87bcdd2e
      Committed by Jesse Barnes
      This avoids some bad register writes and generally feels more correct
      than unconditionally trying to redirect interrupts and such.
      
      References: https://bugs.freedesktop.org/show_bug.cgi?id=91777
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      87bcdd2e
  26. 11 Sep 2015 (1 commit)
  27. 26 Aug 2015 (1 commit)
  28. 15 Aug 2015 (4 commits)