1. 02 Aug, 2017 1 commit
  2. 11 Jul, 2017 2 commits
  3. 08 Jun, 2017 2 commits
    • drm/i915/gvt: Trigger scheduling after context complete · f100daec
      Authored by Ping Gao
      The time-based scheduler polls the context busy status every
      microsecond during a vGPU switch, which leaves the GPU idle for a
      while when the context is very small and completes before the next
      microsecond tick arrives. Triggering scheduling immediately after
      the context completes eliminates this idle time and improves
      performance (see the sketch after this entry).
      
      Create two vGPUs of the same type and run Heaven simultaneously:
      Before this patch:
       +---------+----------+----------+
       |         |  vGPU1   |   vGPU2  |
       +---------+----------+----------+
       |  Heaven |  357     |    354   |
       +---------+----------+----------+
      
      After this patch:
       +---------+----------+----------+
       |         |  vGPU1   |   vGPU2  |
       +---------+----------+----------+
       |  Heaven |  397     |    398   |
       +---------+----------+----------+
      
      v2: Protect need_reschedule with the gvt lock.
      Signed-off-by: Ping Gao <ping.a.gao@intel.com>
      Signed-off-by: Weinan Li <weinan.z.li@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      f100daec
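      A minimal sketch of the idea follows. intel_gvt_schedule() and the
      scheduler's need_reschedule field do exist in gvt, but the hook name
      and exact call site here are assumptions, not the literal patch:

        /* Sketch only: on_context_complete() is an assumed hook name.
         * Per the v2 note, need_reschedule is protected by the gvt lock. */
        static void on_context_complete(struct intel_vgpu *vgpu)
        {
                struct intel_gvt *gvt = vgpu->gvt;

                mutex_lock(&gvt->lock);
                gvt->scheduler.need_reschedule = true;
                mutex_unlock(&gvt->lock);

                /* Kick the scheduler now rather than letting the GPU sit
                 * idle until the next micro-second poll notices. */
                intel_gvt_schedule(gvt);
        }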
    • drm/i915/gvt: implement per-vm mmio switching optimization · 0e86cc9c
      Authored by Changbin Du
      Commit ab9da627906a ("drm/i915: make context status notifier head be
      per engine") gives us a chance to inspect every single request, so we
      can eliminate unnecessary mmio switching for the same vGPU; mmio
      switching is only needed between different VMs (including the host).
      
      This patch introduces a new general API, intel_gvt_switch_mmio(), to
      replace the old intel_gvt_load/restore_render_mmio(). This function
      can be further optimized for vGPU-to-vGPU switching.
      
      To support switching individual rings, we track the owner occupying
      each ring. When another VM or the host requests a ring, we perform
      the mmio context switch; otherwise there is no need to switch that
      ring (see the sketch after this entry).

      This optimization is very useful when a single guest has plenty of
      workloads and the host is mostly idle; in the best case no mmio
      switching happens at all.
      
      v2:
        o fix missing ring switch issue. (chuanxiao)
        o support individual ring switch.
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Reviewed-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      0e86cc9c
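      A sketch of the owner-tracking logic: intel_gvt_switch_mmio() is the
      API named by the patch, while the save/load helpers and the exact
      ownership bookkeeping below are assumptions:

        /* pre/next identify the outgoing and incoming owner of the ring;
         * NULL stands for the host. */
        void intel_gvt_switch_mmio(struct intel_vgpu *pre,
                                   struct intel_vgpu *next, int ring_id)
        {
                if (pre == next)
                        return;  /* same owner, no mmio switch needed */

                if (pre)
                        save_vgpu_mmio(pre, ring_id);   /* assumed helper */
                if (next)
                        load_vgpu_mmio(next, ring_id);  /* assumed helper */
        }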
  4. 04 May, 2017 1 commit
  5. 28 Apr, 2017 1 commit
    • drm/i915: Sanitize engine context sizes · 63ffbcda
      Authored by Joonas Lahtinen
      Pre-calculate the engine context size based on engine class and
      device generation and store it in the engine instance (rough shape
      sketched after this entry).
      
      v2:
      - Squash and get rid of hw_context_size (Chris)
      
      v3:
      - Move after MMIO init for probing on Gen7 and 8 (Chris)
      - Retained rounding (Tvrtko)

      v4:
      - Rebase for deferred legacy context allocation
      Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Oscar Mateo <oscar.mateo@intel.com>
      Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
      Cc: intel-gvt-dev@lists.freedesktop.org
      Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      63ffbcda
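      Rough shape of the pre-calculation, assuming the Gen8/Gen9 logical
      ring context size macros from intel_lrc.c; the full per-generation
      table lives in the patch itself, and the result is kept on the
      engine instance:

        /* Illustrative only: the real code covers more generations. */
        static u32 engine_context_size(struct drm_i915_private *i915, u8 class)
        {
                switch (class) {
                case RENDER_CLASS:
                        switch (INTEL_GEN(i915)) {
                        case 9:
                                return round_up(GEN9_LR_CONTEXT_RENDER_SIZE,
                                                PAGE_SIZE);
                        case 8:
                                return round_up(GEN8_LR_CONTEXT_RENDER_SIZE,
                                                PAGE_SIZE);
                        default:
                                MISSING_CASE(INTEL_GEN(i915));
                                return round_up(GEN8_LR_CONTEXT_RENDER_SIZE,
                                                PAGE_SIZE);
                        }
                default:
                        /* non-render engines share one size */
                        return round_up(GEN8_LR_CONTEXT_OTHER_SIZE, PAGE_SIZE);
                }
        }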
  6. 13 Apr, 2017 1 commit
  7. 29 Mar, 2017 1 commit
  8. 22 Mar, 2017 1 commit
    • drm/i915/gvt: Use force single submit flag to distinguish gvt request from i915 request · bc2d4b62
      Authored by Changbin Du
      My previous commit ab9da627906a ("drm/i915: make context status
      notifier head be per engine") relies on scheduler->current_workload[x]
      to distinguish gvt's special requests from i915 requests. But this is
      not always reliable, since there is no synchronization between
      workload_thread and the lrc irq handler.
      
          lrc irq handler               workload_thread
               ----                          ----
        pick i915 requests;
                                      intel_vgpu_submit_execlist();
                                      current_workload[x] = xxx;
        shadow_context_status_change();
      
      Here current_workload[x] is not null, yet the current request belongs
      to i915 itself. So instead we check the ctx flag
      CONTEXT_FORCE_SINGLE_SUBMISSION: gvt requests always set it, and only
      gvt requests set it (see the sketch after this entry).
      
      v2: Reverse the order of the multi-condition 'if' statement.
      
      Fixes: ab9da6279 ("drm/i915: make context status notifier head be per engine")
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Reviewed-by: Yulei Zhang <yulei.zhang@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      bc2d4b62
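      The check itself is small. i915_gem_context_force_single_submission()
      is a real i915 helper for this flag; the helper below matches the
      spirit of the patch but is written here as a sketch:

        static inline bool is_gvt_request(struct drm_i915_gem_request *req)
        {
                /* Only gvt shadow contexts set
                 * CONTEXT_FORCE_SINGLE_SUBMISSION, so this identifies gvt
                 * requests regardless of what current_workload[x] holds. */
                return i915_gem_context_force_single_submission(req->ctx);
        }

      With the v2 reordering, the notifier can bail out on non-gvt requests
      before touching any gvt state.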
  9. 21 Mar, 2017 1 commit
  10. 17 Mar, 2017 6 commits
  11. 06 Mar, 2017 1 commit
    • drm/i915/gvt: handle workload lifecycle properly · 8f1117ab
      Authored by Chuanxiao Dong
      i915 has a request replay mechanism which makes sure a request can
      be replayed after a GPU reset. With this mechanism, gvt should wait
      until the GVT request's seqno has passed before completing the
      current workload, so that a context switch interrupt arrives before
      gvt frees the workload. This way the workload lifecycle matches the
      i915 request lifecycle: the workload can only be freed after the
      request is completed (see the sketch after this entry).
      
      v2: use gvt_dbg_sched instead of gvt_err to log when waiting again
      Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      8f1117ab
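      A sketch of the wait-before-free step. The waitqueue and active flag
      below are close to what the patch adds, but the exact field and
      helper names are assumptions:

        static void complete_current_workload(struct intel_vgpu_workload *workload)
        {
                /* The i915 replay mechanism may re-execute this request
                 * after a GPU reset, so wait until the context-switch
                 * interrupt for it has really arrived before freeing. */
                wait_event(workload->shadow_ctx_status_wq,
                           !atomic_read(&workload->shadow_ctx_active));

                gvt_dbg_sched("workload %p complete\n", workload);
                free_workload(workload);   /* assumed helper */
        }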
  12. 23 Feb, 2017 1 commit
  13. 14 Feb, 2017 1 commit
  14. 09 Feb, 2017 1 commit
  15. 09 Jan, 2017 2 commits
  16. 19 Dec, 2016 1 commit
  17. 25 Nov, 2016 1 commit
  18. 14 Nov, 2016 1 commit
    • drm/i915/gvt: fix deadlock in workload_thread · 90d27a1b
      Authored by Pei Zhang
      It's a classic ABBA deadlock between two mutex objects, gvt.lock (a)
      and drm.struct_mutex (b). The deadlock happens between these threads:
      1. intel_gvt_create/destroy_vgpu: P(a)->P(b)
      2. workload_thread: P(b)->P(a)
      
      The fix is to align the lock acquisition order in both threads; this
      patch adjusts the order in the workload_thread function (see the
      sketch after this entry).
      
      This fixes the lockup seen in the guest-reboot stress test.
      
      v2: adjust the sequence in workload_thread based on Zhenyu's suggestion.
          adjust the sequence in the create/destroy_vgpu functions.
      v3: fix to still require struct_mutex for dispatch_workload()
      Signed-off-by: Pei Zhang <pei.zhang@intel.com>
      [zhenyuw: fix unused variable warnings.]
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      90d27a1b
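      The fix in miniature: both sides now take gvt.lock (a) before
      struct_mutex (b), which breaks the A-B/B-A cycle. The snippet is a
      sketch of the workload_thread side, not the literal diff:

        /* workload_thread after the fix: same a -> b order as
         * intel_gvt_create/destroy_vgpu. */
        mutex_lock(&gvt->lock);                   /* (a) */
        mutex_lock(&dev_priv->drm.struct_mutex);  /* (b) */

        ret = dispatch_workload(workload);  /* v3: still needs struct_mutex */

        mutex_unlock(&dev_priv->drm.struct_mutex);
        mutex_unlock(&gvt->lock);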
  19. 10 Nov, 2016 1 commit
  20. 07 Nov, 2016 1 commit
    • drm/i915/gvt: Fix workload status after wait · 9b172345
      Authored by Zhenyu Wang
      Since commit e95433c7, the workload status is only captured on the
      error path, but it needs to be set properly on the normal path too;
      otherwise we fail to complete the workload, which can lead to a vGPU
      reset in the guest VM (see the sketch after this entry).
      
      v2: use braces and add a Fixes tag.
      
      Fixes: e95433c7 ("drm/i915: Rearrange i915_wait_request() accounting with callers")
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      9b172345
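      A sketch of the change, assuming the post-e95433c7
      i915_wait_request() signature (it returns the remaining timeout or a
      negative error as a long); the braces are the v2 note:

        long lret;

        lret = i915_wait_request(workload->req, 0, MAX_SCHEDULE_TIMEOUT);
        if (lret < 0) {
                /* error path: record the failure */
                workload->status = lret;
                if (lret != -ERESTARTSYS)
                        gvt_err("fail to wait workload, skip\n");
        } else {
                /* normal path: must be recorded too, or the workload
                 * never completes and the guest may reset the vGPU */
                workload->status = 0;
        }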
  21. 29 Oct, 2016 1 commit
  22. 27 Oct, 2016 1 commit
    • drm/i915/gvt: fix nested sleeping issue · e45d7b7f
      Authored by Du, Changbin
      We cannot call the blocking function mutex_lock inside a wait loop.
      Here we invoke pick_next_workload(), which needs to acquire a mutex,
      in our "condition" expression, and then enter another going-to-sleep
      sequence that changes the task state. This is dangerous, so rewrite
      the wait sequence to avoid nested sleeping (see the sketch after
      this entry).
      
      v2: fix do...while loop exit condition (zhenyu)
      v3: rebase to gvt-staging branch
      Signed-off-by: Du, Changbin <changbin.du@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      e45d7b7f
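      A sketch of a nested-sleep-free wait loop using the standard
      prepare_to_wait()/finish_wait() idiom (the actual patch uses an
      equivalent kernel wait primitive); pick_next_workload() is named by
      the message, while the waitqueue below is an assumed field:

        struct intel_vgpu_workload *workload = NULL;
        DEFINE_WAIT(wait);

        do {
                /* pick_next_workload() takes a mutex, so call it only
                 * while fully awake, never from a wait_event() condition. */
                workload = pick_next_workload(gvt, ring_id);
                if (workload)
                        break;

                prepare_to_wait(&scheduler->waitq[ring_id], &wait,
                                TASK_INTERRUPTIBLE);
                schedule();
                finish_wait(&scheduler->waitq[ring_id], &wait);
        } while (!kthread_should_stop());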
  23. 20 Oct, 2016 6 commits
  24. 18 Oct, 2016 1 commit
  25. 14 Oct, 2016 3 commits