1. 24 Aug 2022, 1 commit
  2. 30 Mar 2022, 1 commit
  3. 23 Mar 2022, 2 commits
  4. 22 Mar 2022, 1 commit
  5. 03 Mar 2022, 2 commits
  6. 17 Feb 2022, 1 commit
  7. 14 Feb 2022, 1 commit
  8. 02 Feb 2022, 1 commit
  9. 11 Jan 2022, 1 commit
  10. 06 Jan 2022, 1 commit
  11. 18 Dec 2021, 1 commit
  12. 15 Dec 2021, 1 commit
    • drm/i915/debugfs: add noreclaim annotations · 1b9e8b1f
      Committed by Matthew Auld
      We have a debugfs hook to directly call into i915_gem_shrink() with the
      fs_reclaim acquire annotations to simulate hitting direct reclaim.
      However, we should also annotate this with memalloc_noreclaim, which
      sets PF_MEMALLOC for us on the current context, to ensure we can't
      re-enter direct reclaim (just like "real" direct reclaim does). This is
      an issue now that ttm_bo_validate() could potentially be called here,
      which might try to allocate a tiny amount of memory to hold the new
      ttm_resource struct, as per the splat below (a minimal sketch of the
      annotation pattern follows this entry):
      
      [ 2507.913844] WARNING: possible recursive locking detected
      [ 2507.913848] 5.16.0-rc4+ #5 Tainted: G U
      [ 2507.913853] --------------------------------------------
      [ 2507.913856] gem_exec_captur/1825 is trying to acquire lock:
      [ 2507.913861] ffffffffb9df2500 (fs_reclaim){..}-{0:0}, at: kmem_cache_alloc_trace+0x30/0x390
      [ 2507.913875]
      but task is already holding lock:
      [ 2507.913879] ffffffffb9df2500 (fs_reclaim){..}-{0:0}, at: i915_drop_caches_set+0x1c9/0x2c0 [i915]
      [ 2507.913962]
      other info that might help us debug this:
      [ 2507.913966] Possible unsafe locking scenario:
      
      [ 2507.913970] CPU0
      [ 2507.913973] ----
      [ 2507.913975] lock(fs_reclaim);
      [ 2507.913979] lock(fs_reclaim);
      [ 2507.913983]
      
                  *** DEADLOCK ***
      
      [ 2507.913988] May be due to missing lock nesting notation
      
      [ 2507.913992] 4 locks held by gem_exec_captur/1825:
      [ 2507.913997] #0: ffff888101f6e460 (sb_writers#17){..}-{0:0}, at: ksys_write+0xe9/0x1b0
      [ 2507.914009] #1: ffff88812d99e2b8 (&attr->mutex){..}-{3:3}, at: simple_attr_write+0xbb/0x220
      [ 2507.914019] #2: ffffffffb9df2500 (fs_reclaim){..}-{0:0}, at: i915_drop_caches_set+0x1c9/0x2c0 [i915]
      [ 2507.914085] #3: ffff8881b4a11b20 (reservation_ww_class_mutex){..}-{3:3}, at: ww_mutex_trylock+0x43f/0xcb0
      [ 2507.914097]
      stack backtrace:
      [ 2507.914102] CPU: 0 PID: 1825 Comm: gem_exec_captur Tainted: G U 5.16.0-rc4+ #5
      [ 2507.914109] Hardware name: ASUS System Product Name/PRIME B560M-A AC, BIOS 0403 01/26/2021
      [ 2507.914115] Call Trace:
      [ 2507.914118] <TASK>
      [ 2507.914121] dump_stack_lvl+0x59/0x73
      [ 2507.914128] __lock_acquire.cold+0x227/0x3b0
      [ 2507.914135] ? lockdep_hardirqs_on_prepare+0x410/0x410
      [ 2507.914141] ? __lock_acquire+0x23ca/0x5000
      [ 2507.914147] lock_acquire+0x19c/0x4b0
      [ 2507.914152] ? kmem_cache_alloc_trace+0x30/0x390
      [ 2507.914157] ? lock_release+0x690/0x690
      [ 2507.914163] ? lock_is_held_type+0xe4/0x140
      [ 2507.914170] ? ttm_sys_man_alloc+0x47/0xb0 [ttm]
      [ 2507.914178] fs_reclaim_acquire+0x11a/0x160
      [ 2507.914183] ? kmem_cache_alloc_trace+0x30/0x390
      [ 2507.914188] kmem_cache_alloc_trace+0x30/0x390
      [ 2507.914192] ? lock_release+0x37f/0x690
      [ 2507.914198] ttm_sys_man_alloc+0x47/0xb0 [ttm]
      [ 2507.914206] ttm_bo_pipeline_gutting+0x70/0x440 [ttm]
      [ 2507.914214] ? ttm_mem_io_free+0x150/0x150 [ttm]
      [ 2507.914221] ? lock_is_held_type+0xe4/0x140
      [ 2507.914227] ttm_bo_validate+0x2fb/0x370 [ttm]
      [ 2507.914234] ? lock_acquire+0x19c/0x4b0
      [ 2507.914239] ? ttm_bo_bounce_temp_buffer.constprop.0+0xf0/0xf0 [ttm]
      [ 2507.914246] ? lock_acquire+0x131/0x4b0
      [ 2507.914251] ? lock_is_held_type+0xe4/0x140
      [ 2507.914257] i915_ttm_shrinker_release_pages+0x2bc/0x490 [i915]
      [ 2507.914339] ? i915_ttm_swap_notify+0x130/0x130 [i915]
      [ 2507.914429] ? i915_gem_object_release_mmap_offset+0x32/0x250 [i915]
      [ 2507.914529] i915_gem_shrink+0xb14/0x1290 [i915]
      [ 2507.914616] ? ___i915_gem_object_make_shrinkable+0x3e0/0x3e0 [i915]
      [ 2507.914698] ? _raw_spin_unlock_irqrestore+0x2d/0x60
      [ 2507.914705] ? track_intel_runtime_pm_wakeref+0x180/0x230 [i915]
      [ 2507.914777] i915_gem_shrink_all+0x4b/0x70 [i915]
      [ 2507.914857] i915_drop_caches_set+0x227/0x2c0 [i915]
      Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
      Signed-off-by: Matthew Auld <matthew.auld@intel.com>
      Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20211213125530.3960007-1-matthew.auld@intel.com
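      A minimal sketch of the annotation pattern described above, assuming the
      kernel's fs_reclaim_acquire()/fs_reclaim_release() lockdep hooks and the
      memalloc_noreclaim_save()/memalloc_noreclaim_restore() helpers from
      <linux/sched/mm.h>; the function name and surrounding structure are
      illustrative, not the exact upstream diff:

      #include <linux/sched/mm.h>  /* fs_reclaim_*, memalloc_noreclaim_* */

      /* Illustrative only: simulate direct reclaim from debugfs. PF_MEMALLOC,
       * set by memalloc_noreclaim_save(), keeps any small allocations made
       * under us (e.g. TTM allocating a ttm_resource) from re-entering
       * direct reclaim. */
      static void simulate_direct_reclaim(struct drm_i915_private *i915)
      {
              unsigned int flags;

              fs_reclaim_acquire(GFP_KERNEL);     /* lockdep: "in" reclaim */
              flags = memalloc_noreclaim_save();  /* sets PF_MEMALLOC */

              i915_gem_shrink_all(i915);

              memalloc_noreclaim_restore(flags);
              fs_reclaim_release(GFP_KERNEL);
      }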
  13. 09 Dec 2021, 1 commit
  14. 01 Dec 2021, 1 commit
    • drm/i915: Use per device iommu check · cca08469
      Committed by Tvrtko Ursulin
      With both integrated and discrete Intel GPUs in a system, the current
      global check of intel_iommu_gfx_mapped, as done from intel_vtd_active(),
      may not be completely accurate.
      
      In this patch we add an i915 parameter to intel_vtd_active() in order to
      prepare it for multiple GPUs, and we also change the check away from the
      Intel-specific intel_iommu_gfx_mapped (a global exported by the Intel
      IOMMU driver) to probing the presence of an IOMMU on a specific device
      using device_iommu_mapped(); a minimal sketch follows this entry.
      
      This will return true both for IOMMU pass-through and address translation
      modes, which matches the current behaviour. If in the future we wanted to
      distinguish between these two modes, we could either use
      iommu_get_domain_for_dev() and check for the __IOMMU_DOMAIN_PAGING bit
      indicating address translation, or ask for a new API to be exported from
      the IOMMU core code.
      
      v2:
        * Check for dmar translation specifically, not just iommu domain. (Baolu)
      
      v3:
       * Go back to plain "any domain" check for now, rewrite commit message.
      
      v4:
       * Use device_iommu_mapped. (Robin, Baolu)
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Lu Baolu <baolu.lu@linux.intel.com>
      Cc: Lucas De Marchi <lucas.demarchi@intel.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
      Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20211126141424.493753-1-tvrtko.ursulin@linux.intel.com
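      A minimal sketch of the per-device probe described above, assuming
      device_iommu_mapped() from <linux/iommu.h>; the body is illustrative
      rather than the exact upstream code:

      #include <linux/iommu.h>  /* device_iommu_mapped() */

      /* Illustrative: true if an IOMMU is mapped for this GPU's own struct
       * device, covering both pass-through and address translation modes. */
      static bool intel_vtd_active(struct drm_i915_private *i915)
      {
              return device_iommu_mapped(i915->drm.dev);
      }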
  15. 15 Oct 2021, 1 commit
  16. 14 Oct 2021, 1 commit
  17. 22 Sep 2021, 1 commit
  18. 19 Sep 2021, 1 commit
    • drm/i915: deduplicate frequency dump on debugfs · d0c56031
      Committed by Lucas De Marchi
      Although commit 9dd4b065 ("drm/i915/gt: Move pm debug files into a
      gt aware debugfs") says it was moving debug files to gt/, the
      i915_frequency_info file was left behind and its implementation copied
      into drivers/gpu/drm/i915/gt/debugfs_gt_pm.c. Over time we had several
      patches having to change both places to keep them in sync (and some
      patches failing to do so). The initial idea was to remove
      i915_frequency_info, but there are user space tools using it. From a
      quick code search there are other scripts and test tools besides igt, so
      it's not simply a matter of updating igt and removing the older file.
      
      Here we export a function taking a drm_printer as parameter and make
      both show() implementations call this same function (a minimal sketch
      follows this entry). Aside from a few variable name differences, for
      i915_frequency_info this brings in a few lines that were not previously
      printed: RP UP EI, RP UP THRESHOLD, RP DOWN THRESHOLD and RP DOWN EI.
      These came in as part of commit 9c878557 ("drm/i915/gt: Use the RPM
      config register to determine clk frequencies"), which didn't change
      both places.
      Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
      Acked-by: Jani Nikula <jani.nikula@intel.com>
      Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20210918025754.1254705-4-lucas.demarchi@intel.com
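      A minimal sketch of the deduplication pattern described above: one
      exported dump function taking a drm_printer, which both debugfs show()
      implementations call via drm_seq_file_printer(). The function names are
      illustrative, not necessarily the exact upstream symbols:

      #include <drm/drm_print.h>  /* struct drm_printer, drm_seq_file_printer() */

      /* Shared implementation; both i915_frequency_info and the gt/ debugfs
       * file call this, so the register dumps can no longer drift apart. */
      void intel_gt_pm_frequency_dump(struct intel_gt *gt, struct drm_printer *p);

      static int i915_frequency_info_show(struct seq_file *m, void *unused)
      {
              struct drm_i915_private *i915 = m->private;
              struct drm_printer p = drm_seq_file_printer(m);

              intel_gt_pm_frequency_dump(&i915->gt, &p);
              return 0;
      }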
  19. 20 Aug 2021, 1 commit
  20. 13 Aug 2021, 1 commit
  21. 31 Jul 2021, 1 commit
  22. 08 Jul 2021, 1 commit
  23. 07 Jun 2021, 1 commit
  24. 25 May 2021, 1 commit
  25. 27 Apr 2021, 1 commit
  26. 30 Mar 2021, 2 commits
  27. 25 Mar 2021, 1 commit
  28. 02 Feb 2021, 1 commit
  29. 21 Jan 2021, 1 commit
  30. 19 Jan 2021, 1 commit
  31. 31 Dec 2020, 1 commit
  32. 24 Dec 2020, 1 commit
  33. 18 Dec 2020, 1 commit
  34. 04 Dec 2020, 1 commit
  35. 03 Dec 2020, 1 commit
  36. 01 Dec 2020, 2 commits