1. 25 Apr 2020, 1 commit
  2. 08 Apr 2020, 1 commit
  3. 18 Feb 2020, 1 commit
  4. 17 Feb 2020, 1 commit
  5. 16 Feb 2020, 1 commit
  6. 14 Feb 2020, 1 commit
  7. 11 Feb 2020, 1 commit
  8. 30 Jan 2020, 1 commit
  9. 29 Jan 2020, 1 commit
  10. 28 Jan 2020, 1 commit
  11. 22 Jan 2020, 1 commit
  12. 10 Jan 2020, 4 commits
  13. 23 Dec 2019, 1 commit
  14. 20 Dec 2019, 1 commit
  15. 09 Dec 2019, 1 commit
  16. 25 Nov 2019, 1 commit
  17. 18 Nov 2019, 1 commit
  18. 14 Nov 2019, 1 commit
    • drm/i915: Avoid atomic context for error capture · 48715f70
      Committed by Bruce Chang
      io_mapping_map_atomic/kmap_atomic are occasionally taken in error capture
      (if there is no aperture preallocated for the use of error capture), but
      the error capture and compression routines are now run in normal
      context:
      
      <3> [113.316247] BUG: sleeping function called from invalid context at mm/page_alloc.c:4653
      <3> [113.318190] in_atomic(): 1, irqs_disabled(): 0, pid: 678, name: debugfs_test
      <4> [113.319900] no locks held by debugfs_test/678.
      <3> [113.321002] Preemption disabled at:
      <4> [113.321130] [<ffffffffa02506d4>] i915_error_object_create+0x494/0x610 [i915]
      <4> [113.327259] Call Trace:
      <4> [113.327871] dump_stack+0x67/0x9b
      <4> [113.328683] ___might_sleep+0x167/0x250
      <4> [113.329618] __alloc_pages_nodemask+0x26b/0x1110
      <4> [113.334614] pool_alloc.constprop.19+0x14/0x60 [i915]
      <4> [113.335951] compress_page+0x7c/0x100 [i915]
      <4> [113.337110] i915_error_object_create+0x4bd/0x610 [i915]
      <4> [113.338515] i915_capture_gpu_state+0x384/0x1680 [i915]
      
      However, it is not a good idea to run the slow compression inside atomic
      context, so we choose not to.
      
      Fixes: 895d8ebe ("drm/i915: error capture with no ggtt slot")
      Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com>
      Reviewed-by: Brian Welty <brian.welty@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191113231104.24208-1-yu.bruce.chang@intel.com
  19. 07 Nov 2019, 1 commit
  20. 30 Oct 2019, 2 commits
  21. 29 Oct 2019, 1 commit
  22. 26 Oct 2019, 1 commit
  23. 24 Oct 2019, 1 commit
  24. 04 Oct 2019, 2 commits
    • drm/i915: Remove logical HW ID · 2935ed53
      Committed by Chris Wilson
      With the introduction of ctx->engines[] we allow multiple logical
      contexts to be used on the same engine (e.g. with virtual engines).
      According to the bspec, each logical context requires a unique tag in
      order for context-switching to occur correctly between them. [Simple
      experiments show that it is not so easy to trick the HW into performing
      a lite-restore with matching logical IDs, though my memory of early
      Broadwell experiments does suggest that it should be generating
      lite-restores.]
      
      We only need to keep a unique tag for the active lifetime of the
      context, and for as long as we need to identify that context. The HW
      uses the tag to determine if it should use a lite-restore (why not the
      LRCA?) and passes the tag back in various status identifiers. The only
      status we need to track is for OA, so when using perf, we assign the
      specific context a unique tag.
      
      v2: Calculate required number of tags to fill ELSP.
      
      Fixes: 976b55f0 ("drm/i915: Allow a context to define its set of engines")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111895
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-14-chris@chris-wilson.co.uk
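The tag scheme described above amounts to a small ID allocator: hand each active context the lowest free tag, and recycle it on retire. A hedged C sketch of that idea (struct tag_pool and both functions are invented for illustration; the driver's real pool is sized to fill the ELSP, here a 32-bit bitmap stands in):

```c
#include <stdint.h>

/* Hypothetical sketch of a per-context tag allocator: a bit set in
 * `used` means that tag is owned by some active context. */

struct tag_pool {
    uint32_t used;
};

/* Allocate the lowest free tag, or -1 if all 32 tags are busy. */
int tag_alloc(struct tag_pool *p)
{
    for (int n = 0; n < 32; n++) {
        if (!(p->used & (1u << n))) {
            p->used |= 1u << n;
            return n;
        }
    }
    return -1;
}

/* Release a tag once its context retires, so it can be reused. */
void tag_free(struct tag_pool *p, int n)
{
    p->used &= ~(1u << n);
}
```

The point matching the commit: uniqueness only has to hold for the active lifetime of a context, so a small recycled pool suffices instead of a globally unique HW ID per context.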
    • drm/i915: Coordinate i915_active with its own mutex · b1e3177b
      Committed by Chris Wilson
      Forgo the struct_mutex serialisation for i915_active, and interpose its
      own mutex handling for active/retire.
      
      This is a multi-layered sleight-of-hand. First, we had to ensure that no
      active/retire callbacks accidentally inverted the mutex ordering rules,
      nor assumed that they were themselves serialised by struct_mutex. More
      challenging though, is the rule over updating elements of the active
      rbtree. Instead of the whole i915_active now being serialised by
      struct_mutex, allocations/rotations of the tree are serialised by the
      i915_active.mutex and individual nodes are serialised by the caller
      using the i915_timeline.mutex (we need to use nested spinlocks to
      interact with the dma_fence callback lists).
      
      The pain point here is that instead of a single mutex around execbuf, we
      now have to take a mutex for each active tracker (one for each vma,
      context, etc) and a couple of spinlocks for each fence update. The
      improvement in fine-grained locking, allowing for multiple concurrent
      clients (eventually!), should be worth it in typical loads.
      
      v2: Add some comments that barely elucidate anything :(
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
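The locking shift described above, one big struct_mutex replaced by a mutex per active tracker, can be illustrated with a small user-space analogue (all names hypothetical, pthreads standing in for the kernel primitives): two trackers each carry their own mutex, so threads updating different trackers never contend on a shared lock.

```c
#include <pthread.h>

/* User-space analogue of per-tracker locking: each tracker owns its
 * mutex, so independent updates need not serialise on one global lock. */

struct tracker {
    pthread_mutex_t lock;
    long count;
};

#define TRACKER_INIT { PTHREAD_MUTEX_INITIALIZER, 0 }

static void *bump(void *arg)
{
    struct tracker *t = arg;

    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&t->lock);   /* only this tracker's lock */
        t->count++;
        pthread_mutex_unlock(&t->lock);
    }
    return NULL;
}

/* One thread per tracker: with a single global mutex they would fully
 * serialise; with per-tracker mutexes they run concurrently. */
long run_trackers(struct tracker *a, struct tracker *b)
{
    pthread_t ta, tb;

    pthread_create(&ta, NULL, bump, a);
    pthread_create(&tb, NULL, bump, b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return a->count + b->count;
}
```

This also shows the cost the commit admits to: more locks to take per operation, traded for the two updates no longer blocking each other.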
  25. 10 Sep 2019, 1 commit
  26. 30 Aug 2019, 1 commit
  27. 24 Aug 2019, 1 commit
  28. 19 Aug 2019, 1 commit
  29. 15 Aug 2019, 1 commit
  30. 13 Aug 2019, 1 commit
  31. 09 Aug 2019, 2 commits
  32. 07 Aug 2019, 1 commit
  33. 31 Jul 2019, 2 commits