- 31 Oct 2022, 1 commit
Committed by Robert Beckett

swiotlb_max_segment used to return either the maximum size that swiotlb could bounce, or, for Xen PV, PAGE_SIZE even if swiotlb could bounce buffer larger mappings. This made i915 work on Xen PV, as i915 bypasses the coherency aspect of the DMA API and can't cope with bounce buffering, and the PAGE_SIZE return avoided bounce buffering for the Xen/PV case. So instead of adding this hack back, check for Xen/PV directly in i915 and otherwise use the proper DMA API helper to query the maximum mapping size.

Replace swiotlb_max_segment() calls with dma_max_mapping_size(). In i915_gem_object_get_pages_internal(), no longer consider max_segment only if CONFIG_SWIOTLB is enabled; there can be other (iommu related) causes of specific max segment sizes.

Fixes: a2daa27c ("swiotlb: simplify swiotlb_max_segment")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
[hch: added the Xen hack, rewrote the changelog]
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20221020110308.1582518-1-hch@lst.de
(cherry picked from commit 78a07fe7)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
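A minimal sketch of the replacement described above, assuming a small helper in the shmem backend; the helper name, the Xen PV guard placement and the rounding are illustrative rather than the exact upstream diff:

    /* Query the real DMA limit instead of the removed swiotlb helper. */
    static size_t i915_sg_segment_size(struct device *dev)
    {
            size_t max = dma_max_mapping_size(dev);

            /* Xen PV can't cope with bounce buffering: cap at one page. */
            if (xen_pv_domain())
                    max = PAGE_SIZE;

            return round_down(max, PAGE_SIZE);
    }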
-
- 28 Jul 2022, 1 commit
Committed by Chris Wilson

We report object allocation failures to userspace with ENOMEM, yet we still show the memory warning after failing to shrink device allocated pages. While this warning is similar to other system page allocation failures, it is superfluous to the ENOMEM provided directly to userspace.

v2: Add NOWARN in a few more places from where we might return ENOMEM to userspace.

Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4936
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Co-developed-by: Nirmoy Das <nirmoy.das@intel.com>
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220727174023.16766-1-nirmoy.das@intel.com
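A hedged sketch of the pattern: where a failure is already reported to userspace as -ENOMEM, the allocation is tagged __GFP_NOWARN so no extra splat lands in dmesg (the call site shown is illustrative):

    gfp_t gfp = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
    struct page *page;

    page = shmem_read_mapping_page_gfp(mapping, idx, gfp);
    if (IS_ERR(page))
            return PTR_ERR(page); /* userspace sees -ENOMEM, dmesg stays quiet */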
-
- 09 May 2022, 2 commits
Committed by Tvrtko Ursulin

If i915 does not want to use huge pages there is a) no point in setting up the private mount, and b) should the former fail, it is misleading to log that THP support is disabled in the caller, which does not even know whether the callee tried to enable it. Fix both by restructuring the flow in i915_gemfs_init, and at the same time note the failure to set it up in all cases.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220429100414.647857-2-tvrtko.ursulin@linux.intel.com
-
Committed by Matthew Wilcox (Oracle)

pagecache_write_begin() and pagecache_write_end() are now trivial wrappers, so call the aops directly.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
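A minimal sketch of the conversion on the i915 side, assuming the contemporaneous ->write_begin/->write_end prototypes (the flags argument to ->write_begin was dropped around the same series); error handling abbreviated:

    const struct address_space_operations *aops = mapping->a_ops;
    struct page *page;
    void *fsdata;
    int err;

    err = aops->write_begin(file, mapping, pos, len, &page, &fsdata);
    if (err)
            return err;
    /* copy the bytes into the now-locked page here */
    err = aops->write_end(file, mapping, pos, len, len, page, fsdata);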
-
- 17 Mar 2022, 2 commits
Committed by Jani Nikula

Move i915_gem_object_needs_bit17_swizzle() to i915_gem_tiling.[ch] as an i915_gem_object function related to tiling. Also un-inline it while at it; this does not seem to be a function needed in hot paths.

v2: i915_gem_tiling.[ch] instead of intel_ggtt_fencing.[ch] (Chris)

Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220316095018.137998-1-jani.nikula@intel.com
-
Committed by Matthew Auld

Add a generic interface for allocating an object at some specific offset, and convert stolen over. Later we will want to hook this up to different backends.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-4-matthew.auld@intel.com
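From the caller's side, such an interface might look like the sketch below; the function name and signature are assumed from the description, not taken from the final API:

    /* Place the object at a fixed offset within the region. */
    obj = i915_gem_object_create_region_at(mem, size, offset, 0 /* flags */);
    if (IS_ERR(obj))
            return PTR_ERR(obj);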
-
- 28 Feb 2022, 1 commit
Committed by Matthew Auld

With a small LMEM BAR we need to be able to differentiate between the total size of LMEM and how much of it is CPU mappable. The end goal is to be able to utilize the entire range, even if part of it is not CPU accessible.

v2: also update intelfb_create

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220225145502.331818-1-matthew.auld@intel.com
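A sketch of the split being described, as two sizes carried by the region; the field names are an assumption modeled on the small-BAR work:

    struct intel_memory_region {
            resource_size_t total;   /* full size of LMEM */
            resource_size_t io_size; /* portion CPU-mappable through the BAR */
            /* ... */
    };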
-
- 14 Feb 2022, 2 commits
Committed by Jani Nikula

Don't include shmem_fs.h in i915_drv.h, reducing the build dependencies.

Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/44eade17f7ba1480d67c584466eeea3553f31506.1644507885.git.jani.nikula@intel.com
-
Committed by Jani Nikula

Include it only in files that use it.

Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/14edab4a193ea3f73f387a88e3836c8555401871.1644507885.git.jani.nikula@intel.com
-
- 10 Jan 2022, 2 commits
Committed by Matthew Auld

Add some proper flags for the different modes, and shorten the name to something more snappy.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211215110746.865-2-matthew.auld@intel.com
-
Committed by Matthew Auld

Ditch the writeback hook and drop i915_gem_object_writeback(). We already support the shrinker_release_pages hook, which can just call shmem_writeback directly.

Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211215110746.865-1-matthew.auld@intel.com
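A sketch of the consolidation: the shrinker-driven hook can invoke the shmem writeback helper directly (the hook shape is illustrative; __shmem_writeback is the helper broken out in the 22 Oct 2021 entry below):

    static int shmem_shrink(struct drm_i915_gem_object *obj, bool writeback)
    {
            struct address_space *mapping = obj->base.filp->f_mapping;

            if (writeback)
                    __shmem_writeback(obj->base.size, mapping);
            return 0;
    }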
-
- 25 Nov 2021, 1 commit
Committed by Thomas Hellström

There is an interesting refcounting loop: struct intel_memory_region has a struct ttm_resource_manager, ttm_resource_manager->move may hold a reference to i915_request, i915_request may hold a reference to intel_context, intel_context may hold a reference to drm_i915_gem_object, and drm_i915_gem_object may hold a reference to intel_memory_region.

Break this loop by dropping region reference counting. In addition, have regions with a manager moving fence make sure that all region objects are released before freeing the region.

v6:
- Fix a code comment.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-4-thomas.hellstrom@linux.intel.com
-
- 02 Nov 2021, 1 commit
Committed by Thomas Hellström

As we start to introduce asynchronous failsafe object migration, where we update the object state and then submit asynchronous commands, we need to record what memory resources are actually used by various parts of the command stream. Initially for three purposes:

1) Error capture.
2) Asynchronous migration error recovery.
3) Asynchronous vma bind.

By the time these happen, the object state may have been updated to be several migrations ahead and the object sg-tables discarded. In order to make it possible to keep sg-tables with memory resource information for these operations, introduce refcounted sg-tables that aren't freed until the last user is done with them.

The alternative would be to reference information sitting on the corresponding ttm_resources, which typically have the same lifetime as these refcounted sg_tables, but that leads to other awkward constructs: due to the design direction chosen for ttm resource managers that would lead to diamond-style inheritance, the LMEM resources may sometimes be prematurely freed, and finally the subclassed struct ttm_resource would have to bleed into the asynchronous vma bind code.

v3:
- Address a number of style issues (Matthew Auld)
v4:
- Don't check for st->sgl being NULL in i915_ttm_tt__shmem_unpopulate(), that should never happen. (Matthew Auld)
v5:
- Fix a potential double-free (Matthew Auld)

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211101122444.114607-1-thomas.hellstrom@linux.intel.com
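A sketch of the refcounted sg-table idea, with the struct and helper names illustrative: the table is only torn down when the last reference is dropped.

    struct i915_refct_sgt {
            struct kref kref;
            struct sg_table table;
            size_t size;
    };

    static void i915_refct_sgt_release(struct kref *ref)
    {
            struct i915_refct_sgt *rsgt =
                    container_of(ref, typeof(*rsgt), kref);

            sg_free_table(&rsgt->table);
            kfree(rsgt);
    }

    static inline void i915_refct_sgt_put(struct i915_refct_sgt *rsgt)
    {
            if (rsgt)
                    kref_put(&rsgt->kref, i915_refct_sgt_release);
    }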
-
- 22 Oct 2021, 2 commits
Committed by Matthew Auld

For cached objects we can allocate our pages directly in shmem. This should make it possible (in a later patch) to utilise the existing i915-gem shrinker code for such objects. For now this is still disabled.

v2 (Thomas):
- Add an optional try_to_writeback hook for objects. Importantly we need to check if the object is even still shrinkable; in between us dropping the shrinker LRU lock and acquiring the object lock it could for example have been moved. Also we need to differentiate between "lazy" shrinking and the immediate writeback mode. Also, later we need to handle objects which don't even have mm.pages, so bundling this into put_pages() would require somehow handling that edge case; hence just letting the ttm backend handle everything in try_to_writeback doesn't seem too bad.
v3 (Thomas):
- Likely a bad idea to touch the object from the unpopulate hook, since it's not possible to hold a reference without also creating a circular dependency, so likely this is too fragile. For now just ensure we at least mark the pages as dirty/accessed when called from the shrinker on WILLNEED objects.
- s/try_to_writeback/shrinker_release_pages, since this can do more than just writeback.
- Get rid of the do_backup boolean and just set the SWAPPED flag prior to calling unpopulate.
- Keep shmem_tt as lowest priority for the TTM LRU bo_swapout walk, since these just get skipped anyway. We can try to come up with something better later.
v4 (Thomas):
- s/PCI_DMA/DMA/. Also drop NO_KERNEL_MAPPING and NO_WARN, which apparently doesn't do anything with streaming mappings.
- Just pass along the error for ->truncate, and assume nothing.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Oak Zeng <oak.zeng@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Oak Zeng <oak.zeng@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-2-matthew.auld@intel.com
-
Committed by Thomas Hellström

Break out some shmem backend utils for future reuse by the TTM backend: shmem_alloc_st(), shmem_free_st() and __shmem_writeback(), which we can use to provide a shmem-backed TTM page pool for cached-only TTM buffer objects.

The main functional change here is that we now compute the page sizes using the dma segments rather than using the physical page address segments.

v2 (Reported-by: kernel test robot <lkp@intel.com>):
- Make sure we initialise the mapping on the error path in shmem_get_pages()

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018091055.1998191-1-matthew.auld@intel.com
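A sketch of the main functional change, deriving the page-size mask from the coalesced DMA segments instead of the physical page runs (helper shape illustrative):

    static unsigned int sg_dma_page_sizes(struct scatterlist *sg)
    {
            unsigned int page_sizes = 0;

            /* dma-mapped tables pad unused trailing entries with len 0 */
            for (; sg && sg_dma_len(sg); sg = sg_next(sg))
                    page_sizes |= sg_dma_len(sg);

            return page_sizes;
    }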
-
- 20 Oct 2021, 2 commits
Committed by Matthew Auld

On non-LLC platforms, force the flush-on-acquire if this is ever swapped-in. Our async flush path is not trustworthy enough yet (and happens in the wrong order), and with some tricks it's conceivable for userspace to change the cache-level to I915_CACHE_NONE after the pages are swapped-in, and since execbuf binds the object before doing the async flush, there is a potential race window.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-6-matthew.auld@intel.com
-
Committed by Matthew Auld

It looks like we will need this in some more places, so extract it as a helper.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018174508.2137279-3-matthew.auld@intel.com
-
- 27 Jul 2021, 1 commit
Committed by Matthew Auld

EHL and JSL add the 'Bypass LLC' MOCS entry, which should make it possible for userspace to bypass the GTT caching bits set by the kernel, as per the given object cache_level. This is troublesome since the heavy flush we apply when first acquiring the pages is skipped if the kernel thinks the object is coherent with the GPU. As a result it might be possible to bypass the cache and read the contents of the page directly, which could be stale data. If it's just a case of userspace shooting themselves in the foot then so be it, but since i915 takes the stance of always zeroing memory before handing it to userspace, we need to prevent this.

v2: this time actually set cache_dirty in put_pages()
v3: move to get_pages() which looks simpler

BSpec: 34007
References: 04609175 ("Revert "drm/i915/ehl: Update MOCS table for EHL"")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com>
Cc: Francisco Jerez <francisco.jerez.plata@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Chris Wilson <chris.p.wilson@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210723105045.400841-2-matthew.auld@intel.com
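A sketch of the guard as described (v3 moved it to get_pages()); the exact condition is an assumption modeled on the changelog:

    /* Userspace could bypass the GTT caching bits via the EHL/JSL
     * 'Bypass LLC' MOCS entry, so force the flush on first acquire. */
    if (IS_JSL_EHL(i915) && obj->flags & I915_BO_ALLOC_USER)
            obj->cache_dirty = true;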
-
- 30 Jun 2021, 1 commit
Committed by Matthew Auld

For some specialised objects we might need something larger than the region's min_page_size due to some hw restriction, and slightly more hairy is needing something smaller, with the guarantee that such objects will never be inserted into any GTT, which is the case for the paging structures.

This also fixes how we set up the BO page_alignment if we later migrate the object somewhere else. For example, if the placements are {SMEM, LMEM}, then we might get this wrong. Pushing the min_page_size behaviour into the manager should fix this.

v2 (Thomas): push the default page size behaviour into buddy_man, and let the user override it with the page-alignment, which looks cleaner
v3: rebase on ttm sys changes

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210625103824.558481-1-matthew.auld@intel.com
-
- 25 Jun 2021, 2 commits
Committed by Thomas Hellström

For discrete, use TTM for both cached and WC system memory. That means we currently rely on the TTM memory accounting / shrinker. For cached system memory we should consider remaining shmem-backed, which can be implemented from our ttm_tt_populate callback. We can then also reuse our own very elaborate shrinker for that memory.

If an object is evicted to a gem allowable region, we will now consider the object migrated, and we flip the gem region and move the object to a different region list. Since we are now changing gem regions, we can no longer rely on the CONTIGUOUS flag being set based on the region min page size, so remove that flag update. If we want to reintroduce it, we need to put it in the mutable flags.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-4-thomas.hellstrom@linux.intel.com
-
Committed by Thomas Hellström

The object ops flag I915_GEM_OBJECT_HAS_IOMEM and the object flag I915_BO_ALLOC_STRUCT_PAGE are considered immutable by much of our code. Introduce a new mem_flags member to hold these, and make sure checks for these flags being set are either done under the object lock or with the pages properly pinned. The flags will change during migration under the object lock.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-2-thomas.hellstrom@linux.intel.com
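A sketch of the split being described: the immutable ops flags stay where they were, while the per-object backing-store bits move into a mutable mem_flags member (bit values illustrative):

    #define I915_BO_FLAG_STRUCT_PAGE BIT(0) /* backed by struct pages */
    #define I915_BO_FLAG_IOMEM       BIT(1) /* backed by io memory */

    /* Only valid under the object lock or with the pages pinned. */
    static bool i915_gem_object_has_struct_page(const struct drm_i915_gem_object *obj)
    {
            return obj->mem_flags & I915_BO_FLAG_STRUCT_PAGE;
    }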
-
- 02 Jun 2021, 1 commit
Committed by Thomas Hellström

Temporarily remove the buddy allocator and related selftests and hook up the TTM range manager for i915 regions. Also modify the mock region selftests somewhat to account for a fragmenting manager.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210602083818.241793-2-thomas.hellstrom@linux.intel.com
-
- 25 Mar 2021, 1 commit
Committed by Maarten Lankhorst

With all callers and selftests fixed to use ww locking, we can now finally remove this lock.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-62-maarten.lankhorst@linux.intel.com
-
- 24 Mar 2021, 3 commits
Committed by Maarten Lankhorst

Simple adding of i915_gem_object_lock. We may start to pass ww to get_pages() in the future, but that won't be the case here; we override shmem's get_pages() handling by calling i915_gem_object_get_pages_phys(), so no ww is needed.

Changes since v1:
- Call shmem put pages directly; the callback would go down the phys free path.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-10-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

Instead of creating a separate object type, we make changes to the shmem type to clear the struct page backing. This will allow us to ensure we never run into a race when we exchange obj->ops with other function pointers.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-9-maarten.lankhorst@linux.intel.com
-
Committed by Maarten Lankhorst

We want to remove the changing of the ops structure for attaching phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags and put it in the bo. This will remove a potential race of dereferencing the wrong obj->ops without the ww mutex held.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: apply with wiggle]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
-
- 02 Feb 2021, 1 commit
Committed by Thomas Zimmermann

Using struct drm_device.pdev is deprecated. Convert i915 to struct drm_device.dev. No functional changes.

v6:
* also remove assignment in selftests/ in a later patch (Chris)
v5:
* remove assignment in later patch (Chris)
v3:
* rebased
v2:
* move gt/ and gvt/ changes into separate patches

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128133127.2311-2-tzimmermann@suse.de
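The conversion pattern, as a sketch; to_pci_dev() is the standard helper, and the variable names are illustrative:

    /* before: struct pci_dev *pdev = dev_priv->drm.pdev; (deprecated) */
    struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);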
-
- 21 Jan 2021, 1 commit
Committed by Chris Wilson

The obj->stolen pointer is currently used to identify an object allocated from stolen memory. This dates back to when there were just 1.5 types of objects: an object backed by shmemfs, and an object backed by shmemfs with a contiguous physical address. Now that we have several different types of objects, we no longer want to treat stolen objects as a special case.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-3-chris@chris-wilson.co.uk
-
- 15 Jan 2021, 1 commit
Committed by Matthew Auld

Give more flexibility to the caller if they already have an allocated object, in case they wish to apply some transformation to the object prior to handing it over to the region-specific initialisation step, as in gem_create_ext where we would like to first apply the extensions to the object.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210114182402.840247-3-matthew.auld@intel.com
-
- 14 Oct 2020, 1 commit
Committed by Matthew Wilcox (Oracle)

i915 does not want to see value entries. Switch it to use find_lock_page() instead, and remove the export of find_lock_entry(). Move find_lock_entry() and find_get_entry() to mm/internal.h to discourage any future use.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Link: https://lkml.kernel.org/r/20200910183318.20139-6-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
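A sketch of the caller-side difference: find_lock_page() never hands back shadow/value entries, so the driver gets either NULL or a real, locked page (usage illustrative):

    struct page *page;

    page = find_lock_page(mapping, index);
    if (!page)
            return -ENOENT;
    /* use the locked page here */
    unlock_page(page);
    put_page(page);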
-
- 30 May 2020, 1 commit
Committed by Chris Wilson

Name the object classes and their offspring for easier lockdep debugging.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529183204.16850-2-chris@chris-wilson.co.uk
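In general, naming a lock class for lockdep looks like the sketch below; the exact i915 call sites are not reproduced here, but __mutex_init() taking a name string is standard kernel API:

    static struct lock_class_key obj_mm_lock_key;

    __mutex_init(&obj->mm.lock, "obj->mm.lock", &obj_mm_lock_key);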
-
- 25 May 2020, 2 commits
Committed by Chris Wilson

Leave the error propagation in place, but limit the warnings to only show up in CI if the unlikely errors are hit.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200525141957.3061-2-chris@chris-wilson.co.uk
-
Committed by Chris Wilson

Our __sgt_iter assumes that the scattergather list has at least one element. But during construction we may fail in allocating the first page, and so mark the first element as the terminator. This is unexpected!

[22555.524752] RIP: 0010:shmem_get_pages+0x506/0x710 [i915]
[22555.524759] Code: 49 8b 2c 24 31 c0 66 89 44 24 40 48 85 ed 0f 84 62 01 00 00 4c 8b 75 00 8b 5d 08 44 8b 7d 0c 48 8b 0d 7e 34 07 e2 49 83 e6 fc <49> 8b 16 41 01 df 48 89 cf 48 89 d0 48 c1 e8 2d 48 85 c9 0f 84 c8
[22555.524765] RSP: 0018:ffffc9000053f9d0 EFLAGS: 00010246
[22555.524770] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8881ffffa000
[22555.524774] RDX: fffffffffffffff4 RSI: ffffffffffffffff RDI: ffffffff821efe00
[22555.524778] RBP: ffff8881b099ab00 R08: 0000000000000000 R09: 00000000fffffff4
[22555.524782] R10: 0000000000000002 R11: 00000000ffec0a02 R12: ffff8881cd3c8d60
[22555.524786] R13: 00000000fffffff4 R14: 0000000000000000 R15: 0000000000000000
[22555.524790] FS:  00007f4fbeb9b9c0(0000) GS:ffff8881f8580000(0000) knlGS:0000000000000000
[22555.524795] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[22555.524799] CR2: 0000000000000000 CR3: 00000001ec7f0004 CR4: 00000000001606e0
[22555.524803] Call Trace:
[22555.524919]  __i915_gem_object_get_pages+0x4f/0x60 [i915]

Fixes: 85d1225e ("drm/i915: Introduce & use new lightweight SGL iterators")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: <stable@vger.kernel.org> # v4.8+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200522132706.5133-1-chris@chris-wilson.co.uk
(cherry picked from commit 957ad9a0)
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
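A sketch of the hazard and a defensive error path (illustrative, not the exact fix): when the very first page allocation fails, sg_mark_end() lands on element 0, and the unwind must not walk an empty table.

    page = shmem_read_mapping_page_gfp(mapping, i, gfp);
    if (IS_ERR(page)) {
            sg_mark_end(sg);         /* may mark the very first element */
            if (i)                   /* some entries were populated */
                    goto err_unwind; /* safe to walk and put those pages */
            goto err_free;           /* empty table: free it, never iterate */
    }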
-
- 22 May 2020, 1 commit
Committed by Chris Wilson

Our __sgt_iter assumes that the scattergather list has at least one element. But during construction we may fail in allocating the first page, and so mark the first element as the terminator. This is unexpected!

[22555.524752] RIP: 0010:shmem_get_pages+0x506/0x710 [i915]
[22555.524759] Code: 49 8b 2c 24 31 c0 66 89 44 24 40 48 85 ed 0f 84 62 01 00 00 4c 8b 75 00 8b 5d 08 44 8b 7d 0c 48 8b 0d 7e 34 07 e2 49 83 e6 fc <49> 8b 16 41 01 df 48 89 cf 48 89 d0 48 c1 e8 2d 48 85 c9 0f 84 c8
[22555.524765] RSP: 0018:ffffc9000053f9d0 EFLAGS: 00010246
[22555.524770] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff8881ffffa000
[22555.524774] RDX: fffffffffffffff4 RSI: ffffffffffffffff RDI: ffffffff821efe00
[22555.524778] RBP: ffff8881b099ab00 R08: 0000000000000000 R09: 00000000fffffff4
[22555.524782] R10: 0000000000000002 R11: 00000000ffec0a02 R12: ffff8881cd3c8d60
[22555.524786] R13: 00000000fffffff4 R14: 0000000000000000 R15: 0000000000000000
[22555.524790] FS:  00007f4fbeb9b9c0(0000) GS:ffff8881f8580000(0000) knlGS:0000000000000000
[22555.524795] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[22555.524799] CR2: 0000000000000000 CR3: 00000001ec7f0004 CR4: 00000000001606e0
[22555.524803] Call Trace:
[22555.524919]  __i915_gem_object_get_pages+0x4f/0x60 [i915]

Fixes: 85d1225e ("drm/i915: Introduce & use new lightweight SGL iterators")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: <stable@vger.kernel.org> # v4.8+
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200522132706.5133-1-chris@chris-wilson.co.uk
-
- 22 Jan 2020, 1 commit
Committed by Pankaj Bharadiya

drm specific WARN* calls include device information in the backtrace, so we know what device the warnings originate from. Convert all the calls of WARN* with device specific drm_WARN* variants in functions where a drm_i915_private struct pointer is readily available. The conversion was done automatically with the below coccinelle semantic patch. checkpatch errors/warnings are fixed manually.

@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}

@rule2@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}

command: spatch --sp-file <script> --dir drivers/gpu/drm/i915/gem \
         --linux-spacing --in-place

Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200115034455.17658-6-pankaj.laxminarayan.bharadiya@intel.com
-
- 29 Dec 2019, 1 commit
Committed by Lukasz Fiedorowicz

Debugfs i915_gem_object is extended to enable the IGTs to detect the LMEM's availability and the total size of LMEM.

v2: READ_ONCE is used [Chris]
v3: %pa is used for printing the resource [Chris]
v4: All regions' details added to debugfs [Chris]
v5: Macro for_each_mem_region added; name is initialized at region init [Chris]

Signed-off-by: Lukasz Fiedorowicz <lukasz.fiedorowicz@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191227133748.4330-1-ramalingam.c@intel.com
-
- 22 Oct 2019, 1 commit
Committed by Chris Wilson

Separate each object class into a separate lock type to avoid lockdep cross-contamination between paths (i.e. userptr!).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022144501.26486-1-chris@chris-wilson.co.uk
-
- 18 Oct 2019, 1 commit
Committed by Matthew Auld

Convert shmem to an intel_memory_region.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191018090751.28295-2-matthew.auld@intel.com
-
- 07 Aug 2019, 1 commit
Committed by Jani Nikula

Disentangle i915_drv.h from intel_drv.h, which gets included via i915_trace.h. This necessitates including i915_trace.h wherever it's needed.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ed82bf259d3b725a1a1a3c3e9d6fb5c08bc4d489.1565085691.git.jani.nikula@intel.com
-
- 04 Jul 2019, 1 commit
Committed by Chris Wilson

Since reservation_object_fini() does an immediate free, rather than kfree_rcu as normal, we have to delay the release until after the RCU grace period has elapsed (i.e. from the rcu cleanup callback) so that we can rely on the RCU protected access to the fences while the object is a zombie.

i915_gem_busy_ioctl relies on having an RCU barrier to protect the reservation in order to avoid having to take a reference and strong memory barriers.

v2: Order is important; only release after putting the pages!

Fixes: c03467ba ("drm/i915/gem: Free pages before rcu-freeing the object")
Testcase: igt/gem_busy/close-race
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190703180601.10950-1-chris@chris-wilson.co.uk
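A sketch of the ordering described, based on the 2019-era structures (names illustrative): pages are released eagerly, while the final free, including reservation_object_fini(), is deferred to the RCU callback.

    static void __i915_gem_free_object_rcu(struct rcu_head *head)
    {
            struct drm_i915_gem_object *obj =
                    container_of(head, typeof(*obj), rcu);

            reservation_object_fini(&obj->base._resv);
            i915_gem_object_free(obj);
    }

    /* caller: put the pages first, then let RCU reap the zombie object */
    call_rcu(&obj->rcu, __i915_gem_free_object_rcu);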
-