1. 21 Jun 2019 (6 commits)
  2. 20 Jun 2019 (1 commit)
    • drm/i915/execlists: Preempt-to-busy · 22b7a426
      Committed by Chris Wilson
      When using a global seqno, we required a precise stop-the-world event to
      handle preemption and unwind the global seqno counter. To accomplish
      this, we would preempt to a special out-of-band context and wait for the
      machine to report that it was idle. Given an idle machine, we could very
      precisely see which requests had completed and which we needed to feed
      back into the run queue.
      
      However, now that we have scrapped the global seqno, we no longer need
      to precisely unwind the global counter and only track requests by their
      per-context seqno. This allows us to loosely unwind inflight requests
      while scheduling a preemption, with the enormous caveat that the
      requests we put back on the run queue are still _inflight_ (until the
      preemption request is complete). This makes request tracking much more
      messy, as at any point we can see a completed request that we
      believe is not currently scheduled for execution. We also have to be
      careful not to rewind RING_TAIL past RING_HEAD on preempting to the
      running context, and for this we use a semaphore to prevent completion
      of the request before continuing.
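
      To make the semaphore trick concrete, here is a hedged sketch of such a
      busy-wait: the request polls a dword in the HWS page and can only pass
      (and hence complete) once the driver writes the release value. The
      MI_SEMAPHORE_* names are the kernel's existing command definitions; the
      helper name, the slot and the release value are illustrative, and the
      exact encoding in the commit may differ.

      #include "gt/intel_gpu_commands.h" /* MI_SEMAPHORE_* definitions */

      /* Illustrative helper (not the commit's function): emit a polling
       * semaphore wait on a dword in the HWS page, so the request cannot
       * complete until the driver releases the slot by writing 0. */
      static u32 *emit_busywait_sketch(u32 *cs, u32 hws_slot_addr)
      {
          *cs++ = MI_SEMAPHORE_WAIT |
                  MI_SEMAPHORE_GLOBAL_GTT |
                  MI_SEMAPHORE_POLL |
                  MI_SEMAPHORE_SAD_EQ_SDD;
          *cs++ = 0;             /* semaphore data: wait until slot == 0 */
          *cs++ = hws_slot_addr; /* lower 32 bits of the slot address */
          *cs++ = 0;             /* upper 32 bits */

          return cs;
      }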
      
      To accomplish this feat, we change how we track requests scheduled to
      the HW. Instead of appending our requests onto a single list as we
      submit, we track each submission to ELSP as its own block. Then upon
      receiving the CS preemption event, we promote the pending block to the
      inflight block (discarding what was previously being tracked). As normal
      CS completion events arrive, we then remove stale entries from the
      inflight tracker.
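
      A self-contained sketch of that promotion step (stand-in types; the real
      driver tracks struct i915_request pointers in fixed-size port arrays on
      its execlists state, and the names below are illustrative):

      #include <stdbool.h>
      #include <string.h>

      #define EXECLIST_PORTS 2

      struct request {
          bool completed; /* per-context seqno has passed this request */
      };

      struct execlists_sketch {
          struct request *inflight[EXECLIST_PORTS + 1]; /* NULL-terminated */
          struct request *pending[EXECLIST_PORTS + 1];
      };

      /* CS preemption event: the last ELSP submission becomes the inflight
       * block, discarding whatever was previously tracked. */
      static void promote_pending(struct execlists_sketch *el)
      {
          memcpy(el->inflight, el->pending, sizeof(el->inflight));
          memset(el->pending, 0, sizeof(el->pending));
      }

      /* Normal CS completion event: strip stale (completed) entries from
       * the head of the inflight tracker. */
      static void process_completion(struct execlists_sketch *el)
      {
          while (el->inflight[0] && el->inflight[0]->completed) {
              memmove(&el->inflight[0], &el->inflight[1],
                      sizeof(el->inflight) - sizeof(el->inflight[0]));
              el->inflight[EXECLIST_PORTS] = NULL;
          }
      }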
      
      v2: Be a tinge paranoid and ensure we flush the write into the HWS page
      for the GPU semaphore to pick up in a timely fashion.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190620142052.19311-1-chris@chris-wilson.co.uk
  3. 19 Jun 2019 (1 commit)
  4. 15 Jun 2019 (2 commits)
    • drm/i915: Replace engine->timeline with a plain list · 422d7df4
      Committed by Chris Wilson
      To continue the onslaught of removing the assumption of a global
      execution ordering, another casualty is the engine->timeline. Without an
      actual timeline to track, it is overkill and we can replace it with a
      much less grand plain list. We still need a list of inflight requests,
      for the simple purpose of finding them (for retiring, resetting,
      preemption, etc.).
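
      A hedged sketch of the resulting structure, using kernel list primitives
      but illustrative names rather than the actual i915 fields:

      #include <linux/list.h>
      #include <linux/spinlock.h>
      #include <linux/types.h>

      struct engine_sketch {
          spinlock_t active_lock;
          struct list_head active_requests; /* inflight, submission order */
      };

      struct request_sketch {
          struct list_head link; /* position on the engine's list */
          bool completed;
      };

      /* Retiring walks from the oldest request and stops at the first one
       * still in flight; reset and preemption walk the same list. */
      static void retire_requests(struct engine_sketch *engine)
      {
          struct request_sketch *rq, *rn;

          list_for_each_entry_safe(rq, rn, &engine->active_requests, link) {
              if (!rq->completed)
                  break;
              list_del(&rq->link);
          }
      }
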
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190614164606.15633-3-chris@chris-wilson.co.uk
    • drm/i915: Keep contexts pinned until after the next kernel context switch · ce476c80
      Committed by Chris Wilson
      We need to keep the context image pinned in memory until after the GPU
      has finished writing into it. Since it continues to write as we signal
      the final breadcrumb, we need to keep it pinned until the request after
      it is complete. Currently we rely on knowing the order in which requests
      execute on each engine, so to remove that presumption we need to
      identify a request/context-switch that we know must occur after our
      completion. Any request queued after the signal must imply a context
      switch; for simplicity we use a fresh request from the kernel context.
      
      The sequence of operations for keeping the context pinned until saved is:
      
       - On context activation, we preallocate a node for each physical engine
         the context may operate on. This is to avoid allocations during
         unpinning, which may be from inside FS_RECLAIM context (aka the
         shrinker)
      
       - On context deactivation (at retirement of the last active request,
         which is before we know the context has been saved), we add the
         preallocated node onto a barrier list on each engine
      
       - On engine idling, we emit a switch to kernel context. When this
         switch completes, we know that all previous contexts must have been
         saved, and so on retiring this request we can finally unpin all the
         contexts that were marked as deactivated prior to the switch.
      
      We can enhance this in future by flushing all the idle contexts on a
      regular heartbeat pulse of a switch to kernel context, which will also
      be used to check for hung engines.
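
      A hedged sketch of the three steps above, with illustrative types and
      names rather than the actual i915 implementation:

      #include <linux/errno.h>
      #include <linux/list.h>
      #include <linux/slab.h>

      struct engine_sketch {
          struct list_head barriers; /* contexts awaiting a context save */
      };

      struct context_sketch {
          struct barrier_node *node; /* preallocated on activation */
          bool pinned;
      };

      struct barrier_node {
          struct list_head link;
          struct context_sketch *ce;
      };

      /* 1. Activation: preallocate, so deactivation (which may run under
       *    FS_RECLAIM, i.e. from the shrinker) never needs to allocate. */
      static int context_activate(struct context_sketch *ce)
      {
          ce->node = kmalloc(sizeof(*ce->node), GFP_KERNEL);
          if (!ce->node)
              return -ENOMEM;
          ce->node->ce = ce;
          return 0;
      }

      /* 2. Deactivation: the last request retired, but the HW may still be
       *    writing the image; park the node on the engine's barrier list. */
      static void context_deactivate(struct context_sketch *ce,
                                     struct engine_sketch *engine)
      {
          list_add_tail(&ce->node->link, &engine->barriers);
      }

      /* 3. Retiring the switch to the kernel context: every earlier context
       *    must have been saved, so it is now safe to unpin them all. */
      static void kernel_context_retired(struct engine_sketch *engine)
      {
          struct barrier_node *node, *next;

          list_for_each_entry_safe(node, next, &engine->barriers, link) {
              list_del(&node->link);
              node->ce->pinned = false; /* release the context image */
              kfree(node);
          }
      }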
      
      v2: intel_context_active_acquire/_release
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190614164606.15633-1-chris@chris-wilson.co.uk
  5. 14 Jun 2019 (1 commit)
  6. 10 Jun 2019 (1 commit)
  7. 29 May 2019 (2 commits)
    • Revert "drm/i915: Expand subslice mask" · a10f361d
      Committed by Jani Nikula
      This reverts commit 1ac159e2 ("drm/i915: Expand subslice mask"),
      which kills ICL due to GEM_BUG_ON() sanity checks before CI even gets a
      chance to do anything.
      
      The commit exposes an issue in commit 1e40d4ae ("drm/i915/cnl:
      Implement WaProgramMgsrForCorrectSliceSpecificMmioReads"), which will
      also need to be addressed.
      
      There's a proposed fix [1], but considering the seeming uncertainty with
      the fix as well as the size of the regressing commit (in this context,
      the one that actually brings down ICL), this warrants a revert to get
      ICL working, and gives us time to get all of this right without
      rushing. Even if this means shooting the messenger.
      
      <3>[    9.426327] intel_sseu_get_subslices:46 GEM_BUG_ON(slice >= sseu->max_slices)
      <4>[    9.426355] ------------[ cut here ]------------
      <2>[    9.426357] kernel BUG at drivers/gpu/drm/i915/gt/intel_sseu.c:46!
      <4>[    9.426371] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
      <4>[    9.426377] CPU: 1 PID: 364 Comm: systemd-udevd Not tainted 5.2.0-rc2-CI-CI_DRM_6159+ #1
      <4>[    9.426385] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
      <4>[    9.426444] RIP: 0010:intel_sseu_get_subslices+0x8a/0xe0 [i915]
      <4>[    9.426452] Code: d5 76 b7 e0 48 8b 35 9d 24 21 00 49 c7 c0 07 f0 72 a0 b9 2e 00 00 00 48 c7 c2 00 8e 6d a0 48 c7 c7 a5 14 5b a0 e8 36 3c be e0 <0f> 0b 48 c7 c1 80 d5 6f a0 ba 30 00 00 00 48 c7 c6 00 8e 6d a0 48
      <4>[    9.426468] RSP: 0018:ffffc9000037b9c8 EFLAGS: 00010282
      <4>[    9.426475] RAX: 000000000000000f RBX: 0000000000000000 RCX: 0000000000000000
      <4>[    9.426482] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff88849e346f98
      <4>[    9.426490] RBP: ffff88848a200000 R08: 0000000000000004 R09: ffff88849d50b000
      <4>[    9.426497] R10: 0000000000000000 R11: ffff88849e346f98 R12: ffff88848a209e78
      <4>[    9.426505] R13: 0000000003000000 R14: ffff88848a20b1a8 R15: 0000000000000000
      <4>[    9.426513] FS:  00007f73d5ae8680(0000) GS:ffff88849fc80000(0000) knlGS:0000000000000000
      <4>[    9.426521] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      <4>[    9.426527] CR2: 0000561417b01260 CR3: 0000000494764003 CR4: 0000000000760ee0
      <4>[    9.426535] PKRU: 55555554
      <4>[    9.426538] Call Trace:
      <4>[    9.426585]  wa_init_mcr+0xd5/0x110 [i915]
      <4>[    9.426597]  ? lock_acquire+0xa6/0x1c0
      <4>[    9.426645]  icl_gt_workarounds_init+0x21/0x1a0 [i915]
      <4>[    9.426694]  ? i915_driver_load+0xfcf/0x18a0 [i915]
      <4>[    9.426739]  gt_init_workarounds+0x14c/0x230 [i915]
      <4>[    9.426748]  ? _raw_spin_unlock_irq+0x24/0x50
      <4>[    9.426789]  intel_gt_init_workarounds+0x1b/0x30 [i915]
      <4>[    9.426835]  i915_driver_load+0xfd7/0x18a0 [i915]
      <4>[    9.426843]  ? lock_acquire+0xa6/0x1c0
      <4>[    9.426850]  ? __pm_runtime_resume+0x4f/0x80
      <4>[    9.426857]  ? _raw_spin_unlock_irqrestore+0x4c/0x60
      <4>[    9.426863]  ? _raw_spin_unlock_irqrestore+0x4c/0x60
      <4>[    9.426870]  ? lockdep_hardirqs_on+0xe3/0x1b0
      <4>[    9.426915]  i915_pci_probe+0x29/0xa0 [i915]
      <4>[    9.426923]  pci_device_probe+0x9e/0x120
      <4>[    9.426930]  really_probe+0xea/0x3c0
      <4>[    9.426936]  driver_probe_device+0x10b/0x120
      <4>[    9.426942]  device_driver_attach+0x4a/0x50
      <4>[    9.426948]  __driver_attach+0x97/0x130
      <4>[    9.426954]  ? device_driver_attach+0x50/0x50
      <4>[    9.426960]  bus_for_each_dev+0x74/0xc0
      <4>[    9.426966]  bus_add_driver+0x13f/0x210
      <4>[    9.426971]  ? 0xffffffffa083b000
      <4>[    9.426976]  driver_register+0x56/0xe0
      <4>[    9.426982]  ? 0xffffffffa083b000
      <4>[    9.426987]  do_one_initcall+0x58/0x300
      <4>[    9.426994]  ? do_init_module+0x1d/0x1f6
      <4>[    9.427001]  ? rcu_read_lock_sched_held+0x6f/0x80
      <4>[    9.427007]  ? kmem_cache_alloc_trace+0x261/0x290
      <4>[    9.427014]  do_init_module+0x56/0x1f6
      <4>[    9.427020]  load_module+0x24d1/0x2990
      <4>[    9.427032]  ? __se_sys_finit_module+0xd3/0xf0
      <4>[    9.427037]  __se_sys_finit_module+0xd3/0xf0
      <4>[    9.427047]  do_syscall_64+0x55/0x1c0
      <4>[    9.427053]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
      <4>[    9.427059] RIP: 0033:0x7f73d5609839
      <4>[    9.427064] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
      <4>[    9.427082] RSP: 002b:00007ffdf34477b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
      <4>[    9.427091] RAX: ffffffffffffffda RBX: 00005559fd5d7b40 RCX: 00007f73d5609839
      <4>[    9.427099] RDX: 0000000000000000 RSI: 00007f73d52e8145 RDI: 000000000000000f
      <4>[    9.427106] RBP: 00007f73d52e8145 R08: 0000000000000000 R09: 00007ffdf34478d0
      <4>[    9.427114] R10: 000000000000000f R11: 0000000000000246 R12: 0000000000000000
      <4>[    9.427121] R13: 00005559fd5c90f0 R14: 0000000000020000 R15: 00005559fd5d7b40
      <4>[    9.427131] Modules linked in: i915(+) mei_hdcp x86_pkg_temp_thermal coretemp snd_hda_intel crct10dif_pclmul crc32_pclmul snd_hda_codec snd_hwdep e1000e snd_hda_core ghash_clmulni_intel ptp snd_pcm cdc_ether usbnet mii pps_core mei_me mei prime_numbers btusb btrtl btbcm btintel bluetooth ecdh_generic ecc
      <4>[    9.427254] ---[ end trace af3eeb543bd66e66 ]---
      
      [1] http://patchwork.freedesktop.org/patch/msgid/20190528200655.11605-1-chris@chris-wilson.co.uk
      
      References: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6159/fi-icl-u2/pstore0-1517155098_Oops_1.log
      References: 1e40d4ae ("drm/i915/cnl: Implement WaProgramMgsrForCorrectSliceSpecificMmioReads")
      Fixes: 1ac159e2 ("drm/i915: Expand subslice mask")
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
      Cc: Manasi Navare <manasi.d.navare@intel.com>
      Cc: Michel Thierry <michel.thierry@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
      Cc: Oscar Mateo <oscar.mateo@intel.com>
      Cc: Stuart Summers <stuart.summers@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Yunwei Zhang <yunwei.zhang@intel.com>
      Acked-by: Daniel Vetter <daniel@ffwll.ch>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190529082150.31526-1-jani.nikula@intel.com
    • drm/i915: Expand subslice mask · 1ac159e2
      Committed by Stuart Summers
      Currently, the subslice_mask runtime parameter is stored as an
      array of subslices per slice. Expand the subslice mask array to
      better match what is presented to userspace through the
      I915_QUERY_TOPOLOGY_INFO ioctl. The index into this array is
      then calculated:
        (slice * subslice_stride) + (subslice_index / 8)
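
      For illustration, a stand-alone helper applying that calculation (a
      hypothetical function, not the i915 one):

      #include <stdbool.h>
      #include <stdint.h>

      /* subslice_mask holds subslice_stride bytes of mask per slice; the
       * bit for (slice, subslice) lives at the computed byte offset. */
      static bool subslice_enabled(const uint8_t *subslice_mask,
                                   int subslice_stride,
                                   int slice, int subslice)
      {
          int offset = slice * subslice_stride + subslice / 8;

          return subslice_mask[offset] & (1u << (subslice % 8));
      }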
      
      v2: fix spacing in set_sseu_info args
          use set_sseu_info to initialize sseu data when building
          device status in debugfs
          rename variables in intel_engine_types.h to avoid checkpatch
          warnings
      v3: update headers in intel_sseu.h
      v4: add const to some sseu_dev_info variables
          use sseu->eu_stride for EU stride calculations
      v5: address review comments from Tvrtko and Daniele
      v6: remove extra space in intel_sseu_get_subslices
          return the correct subslice enable in for_each_instdone
          add GEM_BUG_ON to ensure user doesn't pass invalid ss_mask size
          use printk formatted string for subslice mask
      v7: remove string.h header and rebase
      
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
      Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
      Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Signed-off-by: Stuart Summers <stuart.summers@intel.com>
      Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190524154022.13575-6-stuart.summers@intel.com
  8. 28 May 2019 (3 commits)
  9. 22 May 2019 (1 commit)
    • drm/i915: Engine discovery query · c5d3e39c
      Committed by Tvrtko Ursulin
      The engine discovery query allows userspace to enumerate engines and probe
      their configuration and features, all without needing to maintain an
      internal PCI-ID based database.
      
      A new query named DRM_I915_QUERY_ENGINE_INFO is added to the generic i915
      query ioctl, together with the accompanying structure
      drm_i915_query_engine_info. The address of the latter should be passed to the
      kernel in the query.data_ptr field, and should be large enough for the
      kernel to fill out all known engines as struct drm_i915_engine_info
      elements trailing the query.
      
      As with other queries, setting the item query length to zero allows
      userspace to query minimum required buffer size.
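
      A hedged userspace sketch of that two-pass pattern against the i915
      query ioctl (illustrative usage, not code from the patch; error
      handling trimmed):

      #include <stdint.h>
      #include <stdlib.h>
      #include <sys/ioctl.h>
      #include <drm/i915_drm.h>

      static struct drm_i915_query_engine_info *query_engines(int drm_fd)
      {
          struct drm_i915_query_item item = {
              .query_id = DRM_I915_QUERY_ENGINE_INFO,
          };
          struct drm_i915_query query = {
              .num_items = 1,
              .items_ptr = (uintptr_t)&item,
          };
          struct drm_i915_query_engine_info *info;

          /* First pass: item.length == 0 asks for the required size. */
          if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &query) || item.length <= 0)
              return NULL;

          info = calloc(1, item.length);
          if (!info)
              return NULL;
          item.data_ptr = (uintptr_t)info;

          /* Second pass: the kernel fills drm_i915_engine_info elements
           * trailing the query header. */
          if (ioctl(drm_fd, DRM_IOCTL_I915_QUERY, &query)) {
              free(info);
              return NULL;
          }
          return info;
      }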
      
      Enumerated engines have a common type mask which can be used to query all
      hardware engines, versus the engines userspace can submit to using the
      execbuf uAPI.
      
      Engines also have capabilities, a per-engine-class namespace of bits
      describing features not present on all engine instances.
      
      v2:
       * Fixed HEVC assignment.
       * Reorder some fields, rename type to flags, increase width. (Lionel)
       * No need to allocate temporary storage if we do it engine by engine.
         (Lionel)
      
      v3:
       * Describe engine flags and mark mbz fields. (Lionel)
       * HEVC only applies to VCS.
      
      v4:
       * Squash SFC flag into main patch.
       * Tidy some comments.
      
      v5:
       * Add uabi_ prefix to engine capabilities. (Chris Wilson)
       * Report exact size of engine info array. (Chris Wilson)
       * Drop the engine flags. (Joonas Lahtinen)
       * Added some more reserved fields.
       * Move flags after class/instance.
      
      v6:
       * Do not check engine info array was zeroed by userspace but zero the
         unused fields for them instead.
      
      v7:
       * Simplify length calculation loop. (Lionel Landwerlin)
      
      v8:
       * Remove MBZ comments where not applicable.
       * Rename ABI flags to match engine class define naming.
       * Rename SFC ABI flag to reflect it applies to VCS and VECS.
       * SFC is wired to even _logical_ engine instances.
       * SFC applies to VCS and VECS.
       * HEVC is present on all instances on Gen11. (Tony)
       * Simplify length calculation even more. (Chris Wilson)
       * Move info_ptr assignment closer to loop for clarity. (Chris Wilson)
       * Use vdbox_sfc_access from runtime info.
       * Rebase for RUNTIME_INFO.
       * Refactor for lower indentation.
       * Rename uAPI class/instance to engine_class/instance to avoid C++
         keyword.
      
      v9:
       * Rebase for s/num_rings/num_engines/ in RUNTIME_INFO.
      
      v10:
       * Use new copy_query_item.
      
      v11:
       * Consolidate with struct i915_engine_class_instance.
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Jon Bloomfield <jon.bloomfield@intel.com>
      Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
      Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Tony Ye <tony.ye@intel.com>
      Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> # v7
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190522090054.6007-1-tvrtko.ursulin@linux.intel.com
  10. 08 May 2019 (1 commit)
  11. 07 May 2019 (1 commit)
  12. 03 May 2019 (1 commit)
  13. 02 May 2019 (1 commit)
  14. 01 May 2019 (1 commit)
  15. 27 Apr 2019 (3 commits)
  16. 26 Apr 2019 (2 commits)
    • drm/i915: Enable render context support for gen4 (Broadwater to Cantiga) · 9ce9bdb0
      Committed by Chris Wilson
      Broadwater and the rest of gen4 do support saving and reloading
      context-specific registers between contexts, providing isolation of the
      basic GPU state (as programmable by userspace). This allows
      userspace to assume that the GPU retains their state from one batch to the
      next, minimising the amount of state it needs to reload and manually save
      across batches.
      
      v2: CONSTANT_BUFFER woes
      
      Running through piglit turned up an interesting issue: a GPU hang inside
      the context load. The context image includes the CONSTANT_BUFFER command
      that loads an address into an on-GPU buffer, and the context load was
      executing that immediately. However, since it was reading from the GTT
      there is no guarantee that the GTT retains the same configuration as
      when the context was saved, resulting in stray reads and a GPU hang.
      
      Having tried issuing a CONSTANT_BUFFER (to disable the command) from the
      ring before saving the context to no avail, we resort to patching out
      the instruction inside the context image before loading.
      
      This does impose that gen4 always reissues CONSTANT_BUFFER commands on
      each batch, but due to the use of a shared GTT that was and will remain
      a requirement.
      
      v3: ECOSKPD to the rescue
      
      Ville found the magic bit in the ECOSKPD to disable saving and restoring
      the CONSTANT_BUFFER from the context image, thereby completely avoiding
      the GPU hangs from chasing invalid pointers. This appears to be the
      default behaviour for gen5, and so we just need to tweak gen4 to match.
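
      As a rough illustration of the v3 approach, a sketch of poking such a
      bit, assuming ECOSKPD follows the usual masked-register convention
      (the upper 16 bits select which bits the write affects); the bit name
      below is hypothetical, standing in for whichever bit disables the
      CONSTANT_BUFFER save/restore:

      /* Hypothetical bit; the real name/value comes from the commit. */
      #define SKETCH_CONSTANT_BUFFER_SR_DISABLE (1 << 5)

      static void gen4_disable_constant_buffer_sr(struct drm_i915_private *i915)
      {
          intel_uncore_write(&i915->uncore, ECOSKPD,
                             _MASKED_BIT_ENABLE(SKETCH_CONSTANT_BUFFER_SR_DISABLE));
      }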
      
      v4: Fix spelling of ECOSKPD and discover it already exists
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190419172720.5462-1-chris@chris-wilson.co.uk
    • drm/i915: Enable render context support for Ironlake (gen5) · 1215d28e
      Committed by Chris Wilson
      Ironlake does support saving and reloading context-specific registers
      between contexts, providing isolation of the basic GPU state
      (as programmable by userspace). This allows userspace to assume that the
      GPU retains their state from one batch to the next, minimising the
      amount of state it needs to reload, or manually save and restore.
      
      v2: Fix off-by-one in reading CXT_SIZE, and add a comment that the
      CXT_SIZE and context-layout do not match in bspec, but the difference is
      irrelevant as we overallocate the full page anyway (Ville).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190419111749.3910-2-chris@chris-wilson.co.uk
  17. 25 Apr 2019 (2 commits)
    • drm/i915: Invert the GEM wakeref hierarchy · 79ffac85
      Committed by Chris Wilson
      In the current scheme, on submitting a request we take a single global
      GEM wakeref, which trickles down to wake up all GT power domains. This
      is undesirable as we would like to be able to localise our power
      management to the available power domains and to remove the global GEM
      operations from the heart of the driver. (The intent there is to push
      global GEM decisions to the boundary as used by the GEM user interface.)
      
      Now during request construction, each request is responsible via its
      logical context to acquire a wakeref on each power domain it intends to
      utilize. Currently, each request takes a wakeref on the engine(s) and
      the engines themselves take a chipset wakeref. This gives us a
      transition on each engine which we can extend if we want to insert more
      power management control (such as soft rc6). The global GEM operations
      that currently require a struct_mutex are reduced to listening to pm
      events from the chipset GT wakeref. As we reduce the struct_mutex
      requirement, these listeners should evaporate.
      
      Perhaps the biggest immediate change is that this removes the
      struct_mutex requirement around GT power management, allowing us greater
      flexibility in request construction. Another important knock-on effect
      is that by tracking engine usage, we can insert a switch back to the
      kernel context on that engine immediately, avoiding any extra delay or
      inserting global synchronisation barriers. This makes tracking when an
      engine and its associated contexts are idle much easier -- important for
      when we forgo our assumed execution ordering and need idle barriers to
      unpin used contexts. In the process, it means we remove a large chunk of
      code whose only purpose was to switch back to the kernel context.
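
      The inverted hierarchy can be pictured with a self-contained sketch
      (stand-in types, not the i915 wakeref API): each engine's 0 <-> 1
      wakeref transition is the hook point for extra power management, and
      only that transition touches the chipset (GT) wakeref beneath it.

      #include <stdatomic.h>

      struct gt_pm {
          atomic_int wakeref; /* chipset-level reference count */
      };

      struct engine_pm {
          atomic_int wakeref;
          struct gt_pm *gt;
      };

      static void engine_pm_get(struct engine_pm *engine)
      {
          /* 0 -> 1: wake the chipset beneath us (the real driver
           * serialises this transition; omitted here) */
          if (atomic_fetch_add(&engine->wakeref, 1) == 0)
              atomic_fetch_add(&engine->gt->wakeref, 1);
      }

      static void engine_pm_put(struct engine_pm *engine)
      {
          /* 1 -> 0: the idle barrier (switch to the kernel context)
           * would be emitted here before releasing the GT */
          if (atomic_fetch_sub(&engine->wakeref, 1) == 1)
              atomic_fetch_sub(&engine->gt->wakeref, 1);
      }
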
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-5-chris@chris-wilson.co.uk
    • drm/i915: Move GraphicsTechnology files under gt/ · 112ed2d3
      Committed by Chris Wilson
      Start partitioning off the code that talks to the hardware (GT) from the
      uapi layers and move the device facing code under gt/
      
      One casualty is s/intel_ringbuffer.h/intel_engine.h/ with the plan to
      subdivide that header and body further (and split out the submission
      code from the ringbuffer and logical context handling). This patch aims
      to be simple motion so git can fixup inflight patches with little mess.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Acked-by: Jani Nikula <jani.nikula@intel.com>
      Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190424174839.7141-1-chris@chris-wilson.co.uk
  18. 24 Apr 2019 (1 commit)
  19. 11 Apr 2019 (2 commits)
  20. 27 Mar 2019 (3 commits)
  21. 22 Mar 2019 (1 commit)
    • drm/i915: Flush pages on acquisition · a679f58d
      Committed by Chris Wilson
      When we return pages to the system, we ensure that they are marked as
      being in the CPU domain since any external access is uncontrolled and we
      must assume the worst. This means that we need to always flush the pages
      on acquisition if we need to use them on the GPU; from the beginning we
      have used set-domain for this. Set-domain is overkill for the purpose as it is a
      general synchronisation barrier, but our intent is to only flush the
      pages being swapped in. If we move that flush into the pages acquisition
      phase, we know then that when we have obj->mm.pages, they are coherent
      with the GPU and need only maintain that status without resorting to
      heavy-handed use of set-domain.
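
      In outline the idea looks like this (self-contained stand-ins, not the
      i915 code): the flush happens exactly once, when the pages are
      acquired, after which obj->mm.pages can be assumed GPU-coherent.

      #include <stdbool.h>
      #include <stddef.h>

      struct page_set {
          void *vaddr;
          size_t size;
      };

      struct object {
          struct page_set *pages;
          bool cache_coherent; /* e.g. llc platforms need no clflush */
      };

      /* Stand-in for the architecture cache-flush primitive. */
      static void clflush_range(void *vaddr, size_t size)
      {
          (void)vaddr;
          (void)size;
      }

      static struct page_set *acquire_pages(struct object *obj,
                                            struct page_set *from_system)
      {
          /* pages returned by the system are in the CPU domain; make them
           * coherent with the GPU once, on acquisition */
          if (!obj->cache_coherent)
              clflush_range(from_system->vaddr, from_system->size);
          obj->pages = from_system;
          return obj->pages;
      }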
      
      The principal knock-on effect for userspace is through mmap-gtt
      pagefaulting. Our uAPI has always implied that the GTT mmap was async
      (especially as when any pagefault occurs is unpredictable to userspace)
      and so userspace had to apply explicit domain control itself
      (set-domain). However, swapping is transparent to the kernel, and so on
      first fault we need to acquire the pages and make them coherent for
      access through the GTT. Our use of set-domain here leaks into the uABI
      that the first pagefault was synchronous. This is unintentional and,
      barring a few igt, should go unnoticed; nevertheless we bump the uABI
      version for mmap-gtt to reflect the change in behaviour.
      
      Another implication of the change is that gem_create() is presumed to
      create an object that is coherent with the CPU and is in the CPU write
      domain, so a set-domain(CPU) following a gem_create() would be a minor
      operation that merely checked whether we could allocate all pages for
      the object. On applying this change, a set-domain(CPU) causes a clflush
      as we acquire the pages. This will have a small impact on mesa, as we
      move the clflush on !llc from execbuf time to create time, but the
      performance impact should be minimal: the same clflush still occurs,
      merely earlier, and because of the clflush cost userspace already
      recycles bo rather than allocating fresh objects.
      
      Internally, the presumption that objects are created in the CPU
      write-domain and remain so through writes to obj->mm.mapping is more
      prevalent than I expected, but easy enough to catch and apply a manual
      flush.
      
      For the future, we should push the page flush from the central
      set_pages() into the callers so that we can more finely control when it
      is applied, but for now doing it in one location is easier to validate, at
      the cost of sometimes flushing when there is no need.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Matthew Auld <matthew.william.auld@gmail.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Antonio Argenziano <antonio.argenziano@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20190321161908.8007-1-chris@chris-wilson.co.uk
  22. 21 Mar 2019 (2 commits)
  23. 08 Mar 2019 (1 commit)