- 24 March 2021 (1 commit)

Submitted by Maarten Lankhorst
We need to get rid of allocations in the cmd parser because it needs to be called from a signaling context; first move all pinning to execbuf, where we already hold all locks. Allocate jump_whitelist in the execbuffer, and add annotations around intel_engine_cmd_parser(), to ensure we only call the command parser without allocating any memory, or taking any locks we're not supposed to. Because i915_gem_object_get_page() may also allocate memory, add a path to i915_gem_object_get_sg() that prevents memory allocations, and walk the sg list manually. It should be similarly fast. This has the added benefit of being able to catch all memory allocation errors before the point of no return, and return -ENOMEM safely to the execbuf submitter. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-4-maarten.lankhorst@linux.intel.com
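
The pattern this patch relies on, doing every allocation that can fail up front in execbuf and keeping the later signaling-context call allocation-free, can be sketched roughly as below. The names (struct parser_state, prepare_cmd_parser(), parse_cmds(), the jump check) are illustrative assumptions, not the actual i915 interfaces.

```c
#include <linux/bitmap.h>
#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/types.h>

struct parser_state {
	unsigned long *jump_whitelist;	/* preallocated in execbuf */
	unsigned int nr_dwords;
};

/* Runs in execbuf context, where allocating (and failing with -ENOMEM) is safe. */
static int prepare_cmd_parser(struct parser_state *st, unsigned int nr_dwords)
{
	st->jump_whitelist = bitmap_zalloc(nr_dwords, GFP_KERNEL);
	if (!st->jump_whitelist)
		return -ENOMEM;
	st->nr_dwords = nr_dwords;
	return 0;
}

/* Runs later from the signaling path: uses only what was preallocated. */
static int parse_cmds(struct parser_state *st, const u32 *cmds)
{
	unsigned int i;

	bitmap_zero(st->jump_whitelist, st->nr_dwords);
	for (i = 0; i < st->nr_dwords; i++) {
		/* placeholder check; the real parser validates each opcode */
		if (cmds[i] & BIT(31))
			__set_bit(i, st->jump_whitelist);
	}
	return 0;
}
```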

- 2 February 2021 (1 commit)

Submitted by Chris Wilson
Simplify the frontbuffer unpin by removing the lock requirement. The LRU bumping was primarily to protect the GTT from being evicted and from frontbuffers being eagerly shrunk. Now we protect frontbuffers from the shrinker, and we avoid accidentally evicting from the GTT, so the benefit from bumping LRU is no more, and we can save more time by not. Reported-and-tested-by: Matti Hämäläinen <ccr@tnsp.org> Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2905 Fixes: c1793ba8 ("drm/i915: Add ww locking to pin_to_display_plane, v2.") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-6-chris@chris-wilson.co.uk (cherry picked from commit 14ca83ee) Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: <stable@vger.kernel.org> # v5.10+ Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 22 January 2021 (1 commit)

Submitted by Imre Deak
Add a simple helper to read data with the CPU from the page of a GEM object. Do the read either via a kmap if the object has struct pages or an iomap otherwise. This is needed by the next patch, reading a u64 value from the object (w/o requiring the obj to be mapped to the GPU). Suggested by Chris. v2 (Chris): - Sanitize the type and order of func params. - Avoid consts requiring too many casts. - Use BUG_ON instead of WARN_ON, simplify the conditions. - Fix __iomem sparse errors. - Leave locking/syncing/pinning up to the caller, require only that the caller has pinned the object pages. - Check for iomem backing store before reading via an iomap. v3: - Fix offset passed to io_mapping_map_wc() missing a mem.region.start delta. (Chris, Matthew) Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.william.auld@gmail.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20210120213834.1435710-1-imre.deak@intel.com
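
A hedged sketch of the kind of helper described here: read through a kernel mapping of the struct page when the object has one, otherwise through the object's io_mapping. The struct my_obj layout and the single-page assumption are illustrative; only kmap_local_page(), io_mapping_map_wc() and friends are real kernel primitives.

```c
#include <linux/highmem.h>
#include <linux/io-mapping.h>
#include <linux/mm.h>
#include <linux/string.h>

struct my_obj {
	bool has_struct_page;
	struct page **pages;		/* valid when has_struct_page */
	struct io_mapping *iomap;	/* valid otherwise */
};

/* Caller must have pinned the pages; offset + size must stay within one page. */
static void read_from_page(struct my_obj *obj, unsigned long offset,
			   void *dst, unsigned int size)
{
	unsigned long idx = offset >> PAGE_SHIFT;
	unsigned long off = offset_in_page(offset);

	if (obj->has_struct_page) {
		void *src = kmap_local_page(obj->pages[idx]);

		memcpy(dst, src + off, size);
		kunmap_local(src);
	} else {
		void __iomem *src = io_mapping_map_wc(obj->iomap,
						      idx << PAGE_SHIFT,
						      PAGE_SIZE);

		memcpy_fromio(dst, src + off, size);
		io_mapping_unmap(src);
	}
}
```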

- 21 January 2021 (3 commits)

Submitted by Chris Wilson
Simplify the frontbuffer unpin by removing the lock requirement. The LRU bumping was primarily to protect the GTT from being evicted and from frontbuffers being eagerly shrunk. Now we protect frontbuffers from the shrinker, and we avoid accidentally evicting from the GTT, so the benefit from bumping LRU is no more, and we can save more time by not. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-6-chris@chris-wilson.co.uk

Submitted by Chris Wilson
Give obj->mm.quirked a name much more reflective of its purpose (i915_gem_object_has_tiling_quirk) and move it from the obj->mm field as it doesn't denote a quirk of the backing store, but a quirk in the object in its treatment of the backing pages, similar to tiling modes. Then instead of abusing the pinned status of the buffer to protect it from the shrinker, we can instead hide the buffer from the shrinker so it is never considered for being swapped. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-4-chris@chris-wilson.co.uk

Submitted by Chris Wilson
The obj->stolen is currently used to identify an object allocated from stolen memory. This dates back to when there were just 1.5 types of objects, an object backed by shmemfs and an object backed by shmemfs with a contiguous physical address. Now that we have several different types of objects, we no longer want to treat stolen objects as a special case. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119214336.1463-3-chris@chris-wilson.co.uk

- 20 January 2021 (3 commits)

Submitted by Chris Wilson
Currently, if a modeset/pageflip needs to wait for render completion to an object, we boost the priority of that rendering above all other work. We can apply the same interactive priority boosting to explicit fences that we can unwrap into a native i915_request (i.e. sync_file). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119204454.10343-1-chris@chris-wilson.co.uk

Submitted by Chris Wilson
flush_write_domain() is only used within the GEM domain management code, so move it to i915_gem_domain.c and drop the export. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210119144912.12653-5-chris@chris-wilson.co.uk

Submitted by Matthew Auld
Looks like it belongs there anyway, otherwise we have to include the entirety of i915_gem_object.h just to get at the enum. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20210119133106.66294-3-matthew.auld@intel.com

- 6 October 2020 (1 commit)

Submitted by Tvrtko Ursulin
As the previous patch fixed the places where we walk the whole scatterlist for DMA addresses, this patch fixes the random lookup functionality. To achieve this we have to add a second lookup iterator and add an i915_gem_object_get_sg_dma helper, to be used analogously to the existing i915_gem_object_get_sg. Therefore two lookup caches are maintained per object and they are flushed at the same point for simplicity. (Strictly speaking the DMA cache should be flushed from i915_gem_gtt_finish_pages, but today this coincides with unsetting of the pages in general.) Partial VMA view is then fixed to use the new DMA lookup and properly query sg length. v2: * Checkpatch. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: Tom Murphy <murphyt7@tcd.ie> Cc: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20201006092508.1064287-2-tvrtko.ursulin@linux.intel.com
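
The distinction driving the second iterator is that a scatterlist has both a CPU-page view and a DMA view, and the DMA segments need not line up one-to-one with the pages once an IOMMU coalesces them. A minimal, generic illustration (dump_segments() is a made-up name):

```c
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/scatterlist.h>

static void dump_segments(struct sg_table *sgt)
{
	struct sg_page_iter piter;
	struct scatterlist *sg;
	unsigned int i;

	/* CPU view: iterate the PAGE_SIZE pages backing the object. */
	for_each_sgtable_page(sgt, &piter, 0)
		pr_info("page pfn %lu\n", page_to_pfn(sg_page_iter_page(&piter)));

	/* DMA view: iterate mapped segments, which may be fewer and larger. */
	for_each_sgtable_dma_sg(sgt, sg, i) {
		dma_addr_t addr = sg_dma_address(sg);
		unsigned int len = sg_dma_len(sg);

		pr_info("dma segment %u: %pad + %u\n", i, &addr, len);
	}
}
```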

- 25 September 2020 (1 commit)

Submitted by Thomas Zimmermann
GEM object functions deprecate several similar callback interfaces in struct drm_driver. This patch replaces the per-driver callbacks with per-instance callbacks in i915. v2: * move object-function instance to i915_gem_object.c (Jani) Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Acked-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200923102159.24084-7-tzimmermann@suse.de
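
The shape of the change, sketched: instead of filling in per-driver GEM hooks in struct drm_driver, the driver provides a struct drm_gem_object_funcs and points each object at it. The callback bodies below are placeholders; only the wiring mirrors the real interface.

```c
#include <drm/drm_gem.h>

static void my_gem_free(struct drm_gem_object *obj)
{
	/* release backing storage, then free the wrapping object */
}

static const struct drm_gem_object_funcs my_gem_object_funcs = {
	.free = my_gem_free,
	/* .get_sg_table, .vmap, .vunmap, .mmap, ... as the driver requires */
};

static void my_gem_object_init(struct drm_gem_object *obj)
{
	/* Per-instance callbacks: the DRM core consults obj->funcs. */
	obj->funcs = &my_gem_object_funcs;
}
```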

- 8 September 2020 (1 commit)

Submitted by Dave Airlie
These commits caused a regression on a Lenovo T520 (Sandy Bridge) machine belonging to the reporter. We are reverting them for 5.10 for other reasons, so just do it for 5.9 as well. This reverts commit 763fedd6. Reported-by: Harald Arnesen <harald@skogtun.org> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 7 September 2020 (3 commits)

Submitted by Maarten Lankhorst
Use ww locking for pin_to_display_plane for all the pinning and locking. With the locking removed from set_cache_level, we need to fix i915_gem_set_caching_ioctl to take the object reservation lock. As this is a single lock, we don't need to use the ww dance. Changes since v1: - Do not use ww locking in i915_gem_set_caching_ioctl (Thomas). Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-24-maarten.lankhorst@linux.intel.com Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Submitted by Maarten Lankhorst
Execbuffer submission will perform its own WW locking, and we cannot rely on the implicit lock there. This also makes it clear that the GVT code will get a lockdep splat when multiple batchbuffer shadows need to be performed in the same instance, fix that up. Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-7-maarten.lankhorst@linux.intel.com Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Submitted by Maarten Lankhorst
i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory eviction. We don't use it yet, but let's start adding the definition first. To use it, we have to pass a non-NULL ww to gem_object_lock, and don't unlock directly. It is done in i915_gem_ww_ctx_fini. Changes since v1: - Change ww_ctx and obj order in locking functions (Joonas Lahtinen) Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200819140904.1708856-6-maarten.lankhorst@linux.intel.com Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
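
The usage this describes is the classic ww backoff-and-retry loop. The sketch below assumes the i915-internal declarations (struct drm_i915_gem_object, the i915_gem_ww_ctx helpers) are in scope; do_work_locked() is a placeholder.

```c
static int do_work_locked(struct drm_i915_gem_object *a,
			  struct drm_i915_gem_object *b)
{
	return 0;	/* placeholder for the actual pinning/copy work */
}

static int lock_two_objects(struct drm_i915_gem_object *a,
			    struct drm_i915_gem_object *b)
{
	struct i915_gem_ww_ctx ww;
	int err;

	i915_gem_ww_ctx_init(&ww, true);	/* true: interruptible waits */
retry:
	err = i915_gem_object_lock(a, &ww);
	if (!err)
		err = i915_gem_object_lock(b, &ww);
	if (!err)
		err = do_work_locked(a, b);

	if (err == -EDEADLK) {
		/* Drop the locks taken so far and wait for the contended one. */
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	/* No direct unlock: i915_gem_ww_ctx_fini() releases everything. */
	i915_gem_ww_ctx_fini(&ww);
	return err;
}
```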

- 9 July 2020 (2 commits)

Submitted by Chris Wilson
Last user removed, remove the get_dirty_page convenience function. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200708173748.32734-4-chris@chris-wilson.co.uk

Submitted by Chris Wilson
Some objects we map once during their construction, and then never access their mappings again, even if they are kept around for the duration of the driver. Keeping those pages mapped, often vmapped, is therefore wasteful and we should release the maps as soon as we no longer need them. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200708173748.32734-3-chris@chris-wilson.co.uk
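
A sketch of the map lifecycle being tightened here, using i915's pin_map/unpin_map pair; the surrounding function and the memset are illustrative only, and the i915-internal declarations are assumed to be in scope.

```c
static int init_object_contents(struct drm_i915_gem_object *obj)
{
	void *vaddr;

	vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);

	memset(vaddr, 0, obj->base.size);	/* one-time construction writes */

	/* Drop the (often vmapped) CPU mapping as soon as we are done. */
	i915_gem_object_unpin_map(obj);
	return 0;
}
```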

- 21 May 2020 (1 commit)

Submitted by Chris Wilson
Some users want to pass NULL to drm_gem_object_put(), but those using __drm_gem_object_put() did not. Compromise, have both and let the compiler sort it out. drm_gem_fb_destroy() calls drm_gem_object_put() with NULL obj causing: [ 11.584209] BUG: kernel NULL pointer dereference, address: 0000000000000000 [ 11.584213] #PF: supervisor write access in kernel mode [ 11.584215] #PF: error_code(0x0002) - not-present page [ 11.584216] PGD 0 P4D 0 [ 11.584220] Oops: 0002 [#1] SMP NOPTI [ 11.584223] CPU: 7 PID: 1571 Comm: gnome-shell Tainted: G E 5.7.0-rc1-1-default+ #27 [ 11.584225] Hardware name: Micro-Star International Co., Ltd. MS-7A31/X370 XPOWER GAMING TITANIUM (MS-7A31), BIOS 1.MR 12/03/2019 [ 11.584237] RIP: 0010:drm_gem_fb_destroy+0x28/0x70 [drm_kms_helper] <snip> [ 11.584256] Call Trace: [ 11.584279] drm_mode_rmfb+0x189/0x1c0 [drm] [ 11.584299] ? drm_mode_rmfb+0x1c0/0x1c0 [drm] [ 11.584314] drm_ioctl_kernel+0xaa/0xf0 [drm] [ 11.584329] drm_ioctl+0x1ff/0x3b0 [drm] [ 11.584347] ? drm_mode_rmfb+0x1c0/0x1c0 [drm] [ 11.584421] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [ 11.584427] ksys_ioctl+0x87/0xc0 [ 11.584430] __x64_sys_ioctl+0x16/0x20 [ 11.584434] do_syscall_64+0x5f/0x240 [ 11.584438] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [ 11.584440] RIP: 0033:0x7f0ef80f7227 Reported-by: Nirmoy Das <nirmoy.das@amd.com> Fixes: b5d25074 ("drm/gem: fold drm_gem_object_put_unlocked and __drm_gem_object_put()") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Nirmoy Das <nirmoy.das@amd.com> Cc: Emil Velikov <emil.velikov@collabora.com> Cc: Christian König <christian.koenig@amd.com>. Acked-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Emil Velikov <emil.velikov@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200520142347.29060-1-chris@chris-wilson.co.uk
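
The resulting helper is essentially a NULL-tolerant kref drop; a sketch of its shape (named hypothetically here, but mirroring what drm_gem_object_put() looks like after this fix):

```c
#include <drm/drm_gem.h>

static inline void my_gem_object_put(struct drm_gem_object *obj)
{
	if (obj)	/* tolerate NULL, as drm_gem_fb_destroy() relies on */
		kref_put(&obj->refcount, drm_gem_object_free);
}
```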

- 20 May 2020 (2 commits)

Submitted by Emil Velikov
Spelling out _unlocked for each and every driver is annoying. Especially if we consider how many drivers do not know (or need to) about the horror stories involving struct_mutex. Just drop the suffix. It makes the API cleaner. Done via the following script: __from=drm_gem_object_put_unlocked __to=drm_gem_object_put for __file in $(git grep --name-only $__from); do sed -i "s/$__from/$__to/g" $__file; done Cc: Jani Nikula <jani.nikula@linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Emil Velikov <emil.velikov@collabora.com> Acked-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20200515095118.2743122-21-emil.l.velikov@gmail.com

Submitted by Emil Velikov
With an earlier patch we removed the overhead, so now we can lift the helper into the header, effectively folding it with __drm_gem_object_put. v2: drop struct_mutex references (Daniel) Signed-off-by: Emil Velikov <emil.velikov@collabora.com> Acked-by: Sam Ravnborg <sam@ravnborg.org> (v1) Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20200515095118.2743122-11-emil.l.velikov@gmail.com

- 27 February 2020 (1 commit)

Submitted by Jani Nikula
The #include has been splattered all over the place, but there are precious few places, all .c files, that actually need it. v2: remove leftover double newlines Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200225133131.3301-1-jani.nikula@intel.com

- 12 February 2020 (1 commit)

Submitted by Chris Wilson
Make sure we hold the rcu lock as we acquire the rcu protected reference of the object when looking it up from the associated mmap vma. Closes: https://gitlab.freedesktop.org/drm/intel/issues/1083 Fixes: cc662126 ("drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200130143931.1906301-1-chris@chris-wilson.co.uk (cherry picked from commit 280d14a6) Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 8 February 2020 (1 commit)

Submitted by Janusz Krzysztofik
Commit 4f2a572e ("drm/i915/userptr: Never allow userptr into the mappable GGTT") made I915_GEM_MMAP_GTT IOCTLs fail when attempted on a userptr object in order to protect from a lockdep splat. Later on, new mapping types were introduced by commit cc662126 ("drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET"). Those new mapping types suffer from the same lockdep splat issue but they now succeed when tried on top of a userptr object. Fix it. v2: Don't play with the -ENODEV driver response (Chris) Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200204162302.1299516-1-chris@chris-wilson.co.uk

- 31 January 2020 (1 commit)

Submitted by Chris Wilson
Make sure we hold the rcu lock as we acquire the rcu protected reference of the object when looking it up from the associated mmap vma. Closes: https://gitlab.freedesktop.org/drm/intel/issues/1083 Fixes: cc662126 ("drm/i915: Introduce DRM_I915_GEM_MMAP_OFFSET") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200130143931.1906301-1-chris@chris-wilson.co.uk
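
Sketched generically, the lookup pattern being enforced is: hold the RCU read lock across the lookup and only keep the object if a reference can still be taken. The struct names below are illustrative, not the i915 code.

```c
#include <linux/kref.h>
#include <linux/rcupdate.h>

struct my_object {
	struct kref ref;
};

struct my_mmap_node {
	struct my_object *obj;
};

static struct my_object *get_object_rcu(struct my_mmap_node *node)
{
	struct my_object *obj;

	rcu_read_lock();
	obj = READ_ONCE(node->obj);
	if (obj && !kref_get_unless_zero(&obj->ref))
		obj = NULL;	/* object already on its way to being freed */
	rcu_read_unlock();

	return obj;
}
```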

- 8 January 2020 (1 commit)

Submitted by Matthew Auld
Attempt to split i915_gem_gtt.[ch] into more manageable chunks. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20200107134009.3255354-1-chris@chris-wilson.co.uk

- 23 December 2019 (1 commit)

Submitted by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, as we are processing the execbuf, it may be removed. Mark the intel_frontbuffer as rcu protected, and so acquire a reference to the struct as we track activity upon it. Closes: https://gitlab.freedesktop.org/drm/intel/issues/827 Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v5.4+ Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk (cherry picked from commit da42104f) Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 18 December 2019 (1 commit)

Submitted by Chris Wilson
Since obj->frontbuffer is no longer protected by the struct_mutex, as we are processing the execbuf, it may be removed. Mark the intel_frontbuffer as rcu protected, and so acquire a reference to the struct as we track activity upon it. Closes: https://gitlab.freedesktop.org/drm/intel/issues/827 Fixes: 8e7cb179 ("drm/i915: Extract intel_frontbuffer active tracking") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: <stable@vger.kernel.org> # v5.4+ Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191218104043.3539458-1-chris@chris-wilson.co.uk

- 4 December 2019 (1 commit)

Submitted by Abdiel Janulgue
This is really just an alias of mmap_gtt. The 'mmap offset' nomenclature comes from the value returned by this ioctl which is the offset into the device fd which userspace uses with mmap(2). mmap_gtt was our initial mmap_offset implementation, this extends our CPU mmap support to allow additional fault handlers that depend on the object's backing pages. Note that we multiplex mmap_gtt and mmap_offset through the same ioctl, and use the zero extending behaviour of drm to differentiate between them, when we inspect the flags. To support multiple mmap types on an object we need to support multiple mmap_offsets for an object (each offset in the global device address space corresponding to a unique instance of the object for a file + mmap type). As we drop the simplified drm core idea of a single mmap_offset, we need to provide replacement hooks for the dumb mmap interface as well. Link: https://gitlab.freedesktop.org/mesa/mesa/merge_requests/1675 Testcase: igt/gem_mmap_offset Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191204120032.3682839-1-chris@chris-wilson.co.uk
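
From the userspace side the ioctl is consumed in two steps: ask for a fake offset for the handle and the desired mapping type, then hand that offset to mmap(2) on the DRM fd. A sketch (error handling trimmed; fd and handle are assumed to exist):

```c
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

static void *map_bo_wc(int fd, uint32_t handle, size_t size)
{
	struct drm_i915_gem_mmap_offset arg;

	memset(&arg, 0, sizeof(arg));
	arg.handle = handle;
	arg.flags = I915_MMAP_OFFSET_WC;	/* request a write-combined CPU mapping */

	if (ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg))
		return MAP_FAILED;

	/* arg.offset is the fake offset into the device fd's address space. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, arg.offset);
}
```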

- 7 November 2019 (2 commits)

Submitted by Daniel Vetter
So strictly speaking the existing annotation is also ok, because we have a chain of obj->mm.lock#I915_MM_GET_PAGES -> fs_reclaim -> obj->mm.lock (the shrinker cannot get at an object while we're in get_pages, hence this is safe). But it's confusing, so try to take the right subclass of the lock. This does reduce our lockdep-based checking a bit, but then it's also less fragile, in case we ever change the nesting around. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191104173720.2696-3-daniel.vetter@ffwll.ch
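
A generic illustration of the subclass idea (the enum, lock and function names are made up; mutex_lock_nested() is the real lockdep primitive):

```c
#include <linux/mutex.h>

enum my_lock_subclass {
	MY_LOCK_DEFAULT = 0,
	MY_LOCK_GET_PAGES,	/* held while populating the backing store */
};

static void my_get_pages(struct mutex *lock)
{
	/* Tell lockdep this is the get-pages nesting level, distinct from
	 * the acquisition the shrinker can reach. */
	mutex_lock_nested(lock, MY_LOCK_GET_PAGES);
	/* ... allocate and install backing pages ... */
	mutex_unlock(lock);
}
```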

Submitted by Daniel Vetter
The trouble with having a plain nesting flag for locks which do not naturally nest (unlike block devices and their partitions, which is the original motivation for nesting levels) is that lockdep will never spot a true deadlock if you screw up. This patch is an attempt at trying better, by highlighting a bit more of the actual nature of the nesting that's going on. Essentially we have two kinds of objects: - objects without pages allocated, which cannot be on any lru and are hence inaccessible to the shrinker. - objects which have pages allocated, which are on an lru, and which the shrinker can decide to throw out. For the former type of object, memory allocations while holding obj->mm.lock are permissible. For the latter they are not. And get/put_pages transitions between the two types of objects. This is still not entirely fool-proof since the rules might change. But as long as we run such a code ever at runtime lockdep should be able to observe the inconsistency and complain (like with any other lockdep class that we've split up in multiple classes). But there are a few clear benefits: - We can drop the nesting flag parameter from __i915_gem_object_put_pages, because that function by definition is never going allocate memory, and calling it on an object which doesn't have its pages allocated would be a bug. - We strictly catch more bugs, since there's not only one place in the entire tree which is annotated with the special class. All the other places that had explicit lockdep nesting annotations we're now going to leave up to lockdep again. - Specifically this catches stuff like calling get_pages from put_pages (which isn't really a good idea, if we can call get_pages so could the shrinker). I've seen patches do exactly that. Of course I fully expect CI will show me for the fool I am with this one here :-) v2: There can only be one (lockdep only has a cache for the first subclass, not for deeper ones, and we don't want to make these locks even slower). Still separate enums for better documentation. Real fix: don't forget about phys objs and pin_map(), and fix the shrinker to have the right annotations ... silly me. v3: Forgot usertptr too ... v4: Improve comment for pages_pin_count, drop the IMPORTANT comment and instead prime lockdep (Chris). v5: Appease checkpatch, no double empty lines (Chris) v6: More rebasing over selftest changes. Also somehow I forgot to push this patch :-/ Also format comments consistently while at it. v7: Fix typo in commit message (Joonas) Also drop the priming, with the lmem merge we now have allocations while holding the lmem lock, which wreaks the generic priming I've done in earlier patches. Should probably be resurrected when lmem is fixed. See commit 232a6eba Author: Matthew Auld <matthew.auld@intel.com> Date: Tue Oct 8 17:01:14 2019 +0100 drm/i915: introduce intel_memory_region I'm keeping the priming patch locally so it wont get lost. 
Cc: Matthew Auld <matthew.auld@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Tang, CQ" <cq.tang@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v5) Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> (v6) Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191105090148.30269-1-daniel.vetter@ffwll.ch [mlankhorst: Fix commit typos pointed out by Michael Ruhl]

- 24 October 2019 (1 commit)

Submitted by Chris Wilson
Replace sampling the engine state every so often with a periodic heartbeat request to measure the health of an engine. This is coupled with the forced-preemption to allow long running requests to survive so long as they do not block other users. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Jon Bloomfield <jon.bloomfield@intel.com> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191023133108.21401-5-chris@chris-wilson.co.uk

- 22 October 2019 (1 commit)

Submitted by Chris Wilson
Separate each object class into a separate lock type to avoid lockdep cross-contamination between paths (i.e. userptr!). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191022144501.26486-1-chris@chris-wilson.co.uk

- 18 October 2019 (1 commit)

Submitted by Matthew Auld
Convert shmem to an intel_memory_region. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191018090751.28295-2-matthew.auld@intel.com

- 17 October 2019 (1 commit)

Submitted by Chris Wilson
Daniel Vetter uncovered a nasty cycle in using the mmu-notifiers to invalidate userptr objects which also happen to be pulled into GGTT mmaps. That is when we unbind the userptr object (on mmu invalidation), we revoke all CPU mmaps, which may then recurse into mmu invalidation. We looked for ways of breaking the cycle, but the revocation on invalidation is required and cannot be avoided. The only solution we could see was to not allow such GGTT bindings of userptr objects in the first place. In practice, no one really wants to use a GGTT mmapping of a CPU pointer... Just before Daniel's explosive lockdep patches land in v5.4-rc1, we got a genuine blip from CI: <4>[ 246.793958] ====================================================== <4>[ 246.793972] WARNING: possible circular locking dependency detected <4>[ 246.793989] 5.3.0-gbd6c56f50d15-drmtip_372+ #1 Tainted: G U <4>[ 246.794003] ------------------------------------------------------ <4>[ 246.794017] kswapd0/145 is trying to acquire lock: <4>[ 246.794030] 000000003f565be6 (&dev->struct_mutex/1){+.+.}, at: userptr_mn_invalidate_range_start+0x18f/0x220 [i915] <4>[ 246.794250] but task is already holding lock: <4>[ 246.794263] 000000001799cef9 (&anon_vma->rwsem){++++}, at: page_lock_anon_vma_read+0xe6/0x2a0 <4>[ 246.794291] which lock already depends on the new lock. <4>[ 246.794307] the existing dependency chain (in reverse order) is: <4>[ 246.794322] -> #3 (&anon_vma->rwsem){++++}: <4>[ 246.794344] down_write+0x33/0x70 <4>[ 246.794357] __vma_adjust+0x3d9/0x7b0 <4>[ 246.794370] __split_vma+0x16a/0x180 <4>[ 246.794385] mprotect_fixup+0x2a5/0x320 <4>[ 246.794399] do_mprotect_pkey+0x208/0x2e0 <4>[ 246.794413] __x64_sys_mprotect+0x16/0x20 <4>[ 246.794429] do_syscall_64+0x55/0x1c0 <4>[ 246.794443] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794456] -> #2 (&mapping->i_mmap_rwsem){++++}: <4>[ 246.794478] down_write+0x33/0x70 <4>[ 246.794493] unmap_mapping_pages+0x48/0x130 <4>[ 246.794519] i915_vma_revoke_mmap+0x81/0x1b0 [i915] <4>[ 246.794519] i915_vma_unbind+0x11d/0x4a0 [i915] <4>[ 246.794519] i915_vma_destroy+0x31/0x300 [i915] <4>[ 246.794519] __i915_gem_free_objects+0xb8/0x4b0 [i915] <4>[ 246.794519] drm_file_free.part.0+0x1e6/0x290 <4>[ 246.794519] drm_release+0xa6/0xe0 <4>[ 246.794519] __fput+0xc2/0x250 <4>[ 246.794519] task_work_run+0x82/0xb0 <4>[ 246.794519] do_exit+0x35b/0xdb0 <4>[ 246.794519] do_group_exit+0x34/0xb0 <4>[ 246.794519] __x64_sys_exit_group+0xf/0x10 <4>[ 246.794519] do_syscall_64+0x55/0x1c0 <4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794519] -> #1 (&vm->mutex){+.+.}: <4>[ 246.794519] i915_gem_shrinker_taints_mutex+0x6d/0xe0 [i915] <4>[ 246.794519] i915_address_space_init+0x9f/0x160 [i915] <4>[ 246.794519] i915_ggtt_init_hw+0x55/0x170 [i915] <4>[ 246.794519] i915_driver_probe+0xc9f/0x1620 [i915] <4>[ 246.794519] i915_pci_probe+0x43/0x1b0 [i915] <4>[ 246.794519] pci_device_probe+0x9e/0x120 <4>[ 246.794519] really_probe+0xea/0x3d0 <4>[ 246.794519] driver_probe_device+0x10b/0x120 <4>[ 246.794519] device_driver_attach+0x4a/0x50 <4>[ 246.794519] __driver_attach+0x97/0x130 <4>[ 246.794519] bus_for_each_dev+0x74/0xc0 <4>[ 246.794519] bus_add_driver+0x13f/0x210 <4>[ 246.794519] driver_register+0x56/0xe0 <4>[ 246.794519] do_one_initcall+0x58/0x300 <4>[ 246.794519] do_init_module+0x56/0x1f6 <4>[ 246.794519] load_module+0x25bd/0x2a40 <4>[ 246.794519] __se_sys_finit_module+0xd3/0xf0 <4>[ 246.794519] do_syscall_64+0x55/0x1c0 <4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794519] -> #0 
(&dev->struct_mutex/1){+.+.}: <4>[ 246.794519] __lock_acquire+0x15d8/0x1e90 <4>[ 246.794519] lock_acquire+0xa6/0x1c0 <4>[ 246.794519] __mutex_lock+0x9d/0x9b0 <4>[ 246.794519] userptr_mn_invalidate_range_start+0x18f/0x220 [i915] <4>[ 246.794519] __mmu_notifier_invalidate_range_start+0x85/0x110 <4>[ 246.794519] try_to_unmap_one+0x76b/0x860 <4>[ 246.794519] rmap_walk_anon+0x104/0x280 <4>[ 246.794519] try_to_unmap+0xc0/0xf0 <4>[ 246.794519] shrink_page_list+0x561/0xc10 <4>[ 246.794519] shrink_inactive_list+0x220/0x440 <4>[ 246.794519] shrink_node_memcg+0x36e/0x740 <4>[ 246.794519] shrink_node+0xcb/0x490 <4>[ 246.794519] balance_pgdat+0x241/0x580 <4>[ 246.794519] kswapd+0x16c/0x530 <4>[ 246.794519] kthread+0x119/0x130 <4>[ 246.794519] ret_from_fork+0x24/0x50 <4>[ 246.794519] other info that might help us debug this: <4>[ 246.794519] Chain exists of: &dev->struct_mutex/1 --> &mapping->i_mmap_rwsem --> &anon_vma->rwsem <4>[ 246.794519] Possible unsafe locking scenario: <4>[ 246.794519] CPU0 CPU1 <4>[ 246.794519] ---- ---- <4>[ 246.794519] lock(&anon_vma->rwsem); <4>[ 246.794519] lock(&mapping->i_mmap_rwsem); <4>[ 246.794519] lock(&anon_vma->rwsem); <4>[ 246.794519] lock(&dev->struct_mutex/1); <4>[ 246.794519] *** DEADLOCK *** v2: Say no to mmap_ioctl Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111744 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111870 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190928082546.3473-1-chris@chris-wilson.co.uk (cherry picked from commit a4311745) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

- 9 October 2019 (2 commits)

Submitted by Matthew Auld
Volatile objects are marked as DONTNEED while pinned, therefore once unpinned the backing store can be discarded. This is limited to kernel internal objects. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: CQ Tang <cq.tang@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-4-matthew.auld@intel.com

Submitted by Matthew Auld
Some kernel internal objects may need to be allocated as a contiguous block, also thinking ahead the various kernel io_mapping interfaces seem to expect it, although this is purely a limitation in the kernel API...so perhaps something to be improved. Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> Cc: Michael J Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-3-matthew.auld@intel.com

- 4 October 2019 (1 commit)

Submitted by Chris Wilson
Replace the struct_mutex requirement for pinning the i915_vma with the local vm->mutex instead. Note that the vm->mutex is tainted by the shrinker (we require unbinding from inside fs-reclaim) and so we cannot allocate while holding that mutex. Instead we have to preallocate workers to do the allocations and apply the PTE updates after we have reserved their slot in the drm_mm (using fences to order the PTE writes with the GPU work and with later unbind). In adding the asynchronous vma binding, one subtle requirement is to avoid coupling the binding fence into the backing object->resv. That is the asynchronous binding only applies to the vma timeline itself and not to the pages as that is a more global timeline (the binding of one vma does not need to be ordered with another vma, nor does the implicit GEM fencing depend on a vma, only on writes to the backing store). Keeping the vma binding distinct from the backing store timelines is verified by a number of async gem_exec_fence and gem_exec_schedule tests. The way we do this is quite simple, we keep the fence for the vma binding separate and only wait on it as required, and never add it to the obj->resv itself. Another consequence in reducing the locking around the vma is the destruction of the vma is no longer globally serialised by struct_mutex. A natural solution would be to add a kref to i915_vma, but that requires decoupling the reference cycles, possibly by introducing a new i915_mm_pages object that is owned by both obj->mm and vma->pages. However, we have not taken that route due to the overshadowing lmem/ttm discussions, and instead play a series of complicated games with trylocks to (hopefully) ensure that only one destruction path is called! v2: Add some commentary, and some helpers to reduce patch churn. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk

- 2 October 2019 (2 commits)

Submitted by Chris Wilson
We repeat obj->ops->flags in our object checks, so pull that into its own little helper for clarity. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191002123014.1545-1-chris@chris-wilson.co.uk
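
The helper is tiny; roughly the following, sketched from the description above (the example caller and the flag it tests are shown for illustration):

```c
static bool i915_gem_object_type_has(const struct drm_i915_gem_object *obj,
				     unsigned long flags)
{
	return obj->ops->flags & flags;
}

/* e.g. a caller that used to open-code the obj->ops->flags test: */
static bool my_has_struct_page(const struct drm_i915_gem_object *obj)
{
	return i915_gem_object_type_has(obj, I915_GEM_OBJECT_HAS_STRUCT_PAGE);
}
```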

Submitted by Chris Wilson
Daniel Vetter uncovered a nasty cycle in using the mmu-notifiers to invalidate userptr objects which also happen to be pulled into GGTT mmaps. That is when we unbind the userptr object (on mmu invalidation), we revoke all CPU mmaps, which may then recurse into mmu invalidation. We looked for ways of breaking the cycle, but the revocation on invalidation is required and cannot be avoided. The only solution we could see was to not allow such GGTT bindings of userptr objects in the first place. In practice, no one really wants to use a GGTT mmapping of a CPU pointer... Just before Daniel's explosive lockdep patches land in v5.4-rc1, we got a genuine blip from CI: <4>[ 246.793958] ====================================================== <4>[ 246.793972] WARNING: possible circular locking dependency detected <4>[ 246.793989] 5.3.0-gbd6c56f50d15-drmtip_372+ #1 Tainted: G U <4>[ 246.794003] ------------------------------------------------------ <4>[ 246.794017] kswapd0/145 is trying to acquire lock: <4>[ 246.794030] 000000003f565be6 (&dev->struct_mutex/1){+.+.}, at: userptr_mn_invalidate_range_start+0x18f/0x220 [i915] <4>[ 246.794250] but task is already holding lock: <4>[ 246.794263] 000000001799cef9 (&anon_vma->rwsem){++++}, at: page_lock_anon_vma_read+0xe6/0x2a0 <4>[ 246.794291] which lock already depends on the new lock. <4>[ 246.794307] the existing dependency chain (in reverse order) is: <4>[ 246.794322] -> #3 (&anon_vma->rwsem){++++}: <4>[ 246.794344] down_write+0x33/0x70 <4>[ 246.794357] __vma_adjust+0x3d9/0x7b0 <4>[ 246.794370] __split_vma+0x16a/0x180 <4>[ 246.794385] mprotect_fixup+0x2a5/0x320 <4>[ 246.794399] do_mprotect_pkey+0x208/0x2e0 <4>[ 246.794413] __x64_sys_mprotect+0x16/0x20 <4>[ 246.794429] do_syscall_64+0x55/0x1c0 <4>[ 246.794443] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794456] -> #2 (&mapping->i_mmap_rwsem){++++}: <4>[ 246.794478] down_write+0x33/0x70 <4>[ 246.794493] unmap_mapping_pages+0x48/0x130 <4>[ 246.794519] i915_vma_revoke_mmap+0x81/0x1b0 [i915] <4>[ 246.794519] i915_vma_unbind+0x11d/0x4a0 [i915] <4>[ 246.794519] i915_vma_destroy+0x31/0x300 [i915] <4>[ 246.794519] __i915_gem_free_objects+0xb8/0x4b0 [i915] <4>[ 246.794519] drm_file_free.part.0+0x1e6/0x290 <4>[ 246.794519] drm_release+0xa6/0xe0 <4>[ 246.794519] __fput+0xc2/0x250 <4>[ 246.794519] task_work_run+0x82/0xb0 <4>[ 246.794519] do_exit+0x35b/0xdb0 <4>[ 246.794519] do_group_exit+0x34/0xb0 <4>[ 246.794519] __x64_sys_exit_group+0xf/0x10 <4>[ 246.794519] do_syscall_64+0x55/0x1c0 <4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794519] -> #1 (&vm->mutex){+.+.}: <4>[ 246.794519] i915_gem_shrinker_taints_mutex+0x6d/0xe0 [i915] <4>[ 246.794519] i915_address_space_init+0x9f/0x160 [i915] <4>[ 246.794519] i915_ggtt_init_hw+0x55/0x170 [i915] <4>[ 246.794519] i915_driver_probe+0xc9f/0x1620 [i915] <4>[ 246.794519] i915_pci_probe+0x43/0x1b0 [i915] <4>[ 246.794519] pci_device_probe+0x9e/0x120 <4>[ 246.794519] really_probe+0xea/0x3d0 <4>[ 246.794519] driver_probe_device+0x10b/0x120 <4>[ 246.794519] device_driver_attach+0x4a/0x50 <4>[ 246.794519] __driver_attach+0x97/0x130 <4>[ 246.794519] bus_for_each_dev+0x74/0xc0 <4>[ 246.794519] bus_add_driver+0x13f/0x210 <4>[ 246.794519] driver_register+0x56/0xe0 <4>[ 246.794519] do_one_initcall+0x58/0x300 <4>[ 246.794519] do_init_module+0x56/0x1f6 <4>[ 246.794519] load_module+0x25bd/0x2a40 <4>[ 246.794519] __se_sys_finit_module+0xd3/0xf0 <4>[ 246.794519] do_syscall_64+0x55/0x1c0 <4>[ 246.794519] entry_SYSCALL_64_after_hwframe+0x49/0xbe <4>[ 246.794519] -> #0 
(&dev->struct_mutex/1){+.+.}: <4>[ 246.794519] __lock_acquire+0x15d8/0x1e90 <4>[ 246.794519] lock_acquire+0xa6/0x1c0 <4>[ 246.794519] __mutex_lock+0x9d/0x9b0 <4>[ 246.794519] userptr_mn_invalidate_range_start+0x18f/0x220 [i915] <4>[ 246.794519] __mmu_notifier_invalidate_range_start+0x85/0x110 <4>[ 246.794519] try_to_unmap_one+0x76b/0x860 <4>[ 246.794519] rmap_walk_anon+0x104/0x280 <4>[ 246.794519] try_to_unmap+0xc0/0xf0 <4>[ 246.794519] shrink_page_list+0x561/0xc10 <4>[ 246.794519] shrink_inactive_list+0x220/0x440 <4>[ 246.794519] shrink_node_memcg+0x36e/0x740 <4>[ 246.794519] shrink_node+0xcb/0x490 <4>[ 246.794519] balance_pgdat+0x241/0x580 <4>[ 246.794519] kswapd+0x16c/0x530 <4>[ 246.794519] kthread+0x119/0x130 <4>[ 246.794519] ret_from_fork+0x24/0x50 <4>[ 246.794519] other info that might help us debug this: <4>[ 246.794519] Chain exists of: &dev->struct_mutex/1 --> &mapping->i_mmap_rwsem --> &anon_vma->rwsem <4>[ 246.794519] Possible unsafe locking scenario: <4>[ 246.794519] CPU0 CPU1 <4>[ 246.794519] ---- ---- <4>[ 246.794519] lock(&anon_vma->rwsem); <4>[ 246.794519] lock(&mapping->i_mmap_rwsem); <4>[ 246.794519] lock(&anon_vma->rwsem); <4>[ 246.794519] lock(&dev->struct_mutex/1); <4>[ 246.794519] *** DEADLOCK *** v2: Say no to mmap_ioctl Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111744 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111870 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190928082546.3473-1-chris@chris-wilson.co.uk

- 3 September 2019 (1 commit)

Submitted by Chris Wilson
obj->pin_global was originally used as a means to keep the shrinker off the active scanout, but we use the vma->pin_count itself for that and the obj->frontbuffer to delay shrinking active framebuffers. The other role that obj->pin_global gained was for spotting display objects inside GEM and working harder to keep those coherent; for which we can again simply inspect obj->frontbuffer directly. Coming up next, we will want to manipulate the pin_global counter outside of the principal locks, so would need to make pin_global atomic. However, since obj->frontbuffer is already managed atomically, it makes sense to use that as the primary key for display objects instead of having pin_global. Ville pointed out the principal difference is that obj->frontbuffer is set for as long as an intel_framebuffer is attached to an object, but obj->pin_global was only raised for as long as the object was active. In practice, this means that we consider the object as being on the scanout for longer than is strictly required, causing us to be more proactive in flushing -- though it should be true that we would have flushed eventually when the back became the front, except that on the flip path that flush is async but when hit from another ioctl it will be synchronous. v2: i915_gem_object_is_framebuffer() Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190902040303.14195-5-chris@chris-wilson.co.uk
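
The v2 helper named above boils down to checking obj->frontbuffer; a sketch of its likely shape (the in-tree accessor may differ in detail, and the i915-internal declarations are assumed):

```c
static inline bool
i915_gem_object_is_framebuffer(const struct drm_i915_gem_object *obj)
{
	return READ_ONCE(obj->frontbuffer) != NULL;
}
```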