- 05 Aug 2016, 3 commits
-
-
By Chris Wilson
Inside the kthread context, we can't be interrupted by signals, so touching the mm.interruptible flag is pointless and wait-request now consumes EIO itself. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470388464-28458-4-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
We can completely avoid taking the struct_mutex around the non-blocking waits by switching over to the RCU request management (trading the mutex for an RCU read lock and some complex atomic operations). The improvement is that we gain further contention reduction, and overall the code becomes simpler due to the reduced mutex dancing. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470388464-28458-3-git-send-email-chris@chris-wilson.co.uk
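A rough sketch of the lockless pattern being described, with illustrative names (it assumes the request structs are freed RCU-safely, e.g. via SLAB_DESTROY_BY_RCU or kfree_rcu):

    #include <linux/kref.h>
    #include <linux/rcupdate.h>

    struct req_sketch {
            struct kref ref;
    };

    struct active_sketch {
            struct req_sketch __rcu *request;
    };

    /* Look up the active request without struct_mutex: the RCU read
     * lock keeps the memory valid while we try to take a real
     * reference; losing the kref race means we raced with retirement. */
    static struct req_sketch *active_get_rcu(struct active_sketch *active)
    {
            struct req_sketch *rq;

            rcu_read_lock();
            rq = rcu_dereference(active->request);
            if (rq && !kref_get_unless_zero(&rq->ref))
                    rq = NULL;
            rcu_read_unlock();

            return rq;
    }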
-
By Chris Wilson
We are motivated to avoid using a bitfield for obj->active for a couple of reasons. Firstly, we wish to document our lockless read of obj->active using READ_ONCE inside i915_gem_busy_ioctl(), and that requires an integral type (i.e. not a bitfield). Secondly, gcc produces abysmal code when presented with a bitfield, and that shows up high on the profiles of request tracking (mainly due to excess memory traffic as it converts the bitfield to a register and back, generating frequent AGIs in the process). v2: BIT, break up a long line in computing the other engines, new paint for i915_gem_object_is_active (now i915_gem_object_get_active). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470324762-2545-23-git-send-email-chris@chris-wilson.co.uk
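A minimal sketch of the first point (illustrative types, not the driver's actual declarations): READ_ONCE() needs an addressable scalar, which a bitfield member is not.

    #include <linux/compiler.h>

    struct obj_sketch {
            /* unsigned active : 16;   <- a bitfield cannot be READ_ONCE()d */
            unsigned int active;       /* bitmask, one BIT(engine) per engine */
    };

    static unsigned int busy_engines(const struct obj_sketch *obj)
    {
            /* documented lockless read, legal now that 'active' is an
             * ordinary integer rather than a bitfield */
            return READ_ONCE(obj->active);
    }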
-
- 04 Aug 2016, 5 commits
-
-
By Chris Wilson
There is only one wait-on-request function now, so drop the "expert" indication of a leading __. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-21-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
The future annotations will track the locking used for access to ensure that it is always sufficient. We make the preparations now to present the API ahead of time, and to make sure that GCC can eliminate the unused parameter. Before: 6298417 3619610 696320 10614347 a1f64b vmlinux After: 6298417 3619610 696320 10614347 a1f64b vmlinux (with i915 builtin) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-12-git-send-email-chris@chris-wilson.co.uk
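A sketch of the idea with hypothetical names: the helper accepts the guarding lock purely as documentation for the future annotations, and because it is inline and unused, GCC emits no code for passing it, which is why the before/after sizes match.

    #include <linux/mutex.h>

    struct active_slot { void *request; };

    /* 'mutex' names the lock the caller must hold; unused for now, so
     * the compiler discards the argument entirely */
    static inline void *active_peek(const struct active_slot *active,
                                    struct mutex *mutex)
    {
            return active->request;
    }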
-
By Chris Wilson
In the future, we will want to add annotations to the i915_gem_active struct. The API is thus expanded to hide direct access to the contents of i915_gem_active and to mediate it instead through a number of helpers. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-11-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
In the next patch, request tracking is made more generic, and for that we need a new expanded struct; to separate the logic changes from the mechanical churn, we split the structure renaming out into this patch. v2: Writer's block. Add some spiel about why we track requests. v3: Now i915_gem_active. v4: Now with i915_gem_active_set() for attaching to the active request. v5: Use i915_gem_active_set() from inside the retirement handlers Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-10-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
When we call i915_vma_unbind(), we will wait upon outstanding rendering. This will also trigger a retirement phase, which may update the object lists. If we extend request tracking to the VMA itself (rather than keeping it at the encompassing object), then there is a potential for obj->vma_list to be modified for other elements upon i915_vma_unbind(). As a result, if we walk over the object list and call i915_vma_unbind(), we need to be prepared for that list to change. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1470293567-10811-8-git-send-email-chris@chris-wilson.co.uk
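One defensive way to walk a list that the callee may rewrite underneath you, sketched with illustrative types: re-fetch the head on every pass instead of trusting a cached next pointer (this assumes a successful unbind removes the element, or the loop would never terminate).

    #include <linux/list.h>

    struct vma_sketch {
            struct list_head obj_link;
    };

    static int unbind_all(struct list_head *vma_list,
                          int (*unbind)(struct vma_sketch *))
    {
            struct vma_sketch *vma;
            int ret;

            /* unbind() may retire requests and edit the list, so never
             * reuse an iterator across calls */
            while ((vma = list_first_entry_or_null(vma_list,
                                                   struct vma_sketch,
                                                   obj_link))) {
                    ret = unbind(vma);
                    if (ret)
                            return ret;
            }
            return 0;
    }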
-
- 20 Jul 2016, 4 commits
-
-
By Chris Wilson
Whilst this ultimately wraps kref_put_mutex(), our goal here is the lockless variant, so keep the _unlocked() suffix until we need it no more. s/drm_gem_object_unreference_unlocked/i915_gem_object_put_unlocked/ Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1469005202-9659-7-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1469017917-15134-6-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
Ultimately wraps kref_put(), so adopt its nomenclature for consistency with other subsystems. s/drm_gem_object_unreference/i915_gem_object_put/ Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1469005202-9659-6-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1469017917-15134-5-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
Ultimately wraps kref_get(), so adopt its nomenclature for consistency with other subsystems. s/drm_gem_object_reference/i915_gem_object_get/ Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1469005202-9659-5-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1469017917-15134-4-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
Now that we derive requests from struct fence, swap over to its nomenclature for references. It's shorter and more idiomatic across the kernel. s/i915_gem_request_reference/i915_gem_request_get/ s/i915_gem_request_unreference/i915_gem_request_put/ Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1469005202-9659-2-git-send-email-chris@chris-wilson.co.uk Link: http://patchwork.freedesktop.org/patch/msgid/1469017917-15134-1-git-send-email-chris@chris-wilson.co.uk
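These renames all reduce to thin wrappers over the kref primitives; a hedged sketch of the object-side pair from this series (close to, but not guaranteed identical to, the driver's final helpers, and assuming the v4.7-era DRM reference API):

    static inline struct drm_i915_gem_object *
    i915_gem_object_get(struct drm_i915_gem_object *obj)
    {
            drm_gem_object_reference(&obj->base);   /* ultimately kref_get() */
            return obj;
    }

    static inline void
    i915_gem_object_put(struct drm_i915_gem_object *obj)
    {
            drm_gem_object_unreference(&obj->base); /* ultimately kref_put() */
    }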
-
- 20 May 2016, 2 commits
-
-
By Dave Gordon
The existing for_each_sg_page() iterator is somewhat heavyweight, and is limiting i915 driver performance in a few benchmarks. So here we introduce somewhat lighter-weight iterators, primarily for use with GEM objects or other cases where we need only deal with whole aligned pages. Unlike the old iterator, the new iterators use an internal state structure which is not intended to be accessed by the caller; instead each takes as a parameter an output variable which is set before each iteration. This makes them particularly simple to use :) One of the new iterators provides the caller with the DMA address of each page in turn; the other provides the 'struct page' pointer required by many memory management operations. Various uses of for_each_sg_page() are then converted to the new macros. v2: Force inlining of the sg_iter constructor and make the union anonymous. Signed-off-by: Dave Gordon <david.s.gordon@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1463741647-15666-4-git-send-email-chris@chris-wilson.co.uk
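A usage sketch of the new style (macro and struct names follow this series; treat the exact signatures as an assumption): the caller declares the opaque iterator plus an output variable and never touches the iterator's internals.

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    static void mark_dirty_sketch(struct sg_table *pages)
    {
            struct sgt_iter sgt_iter;  /* internal state, opaque to us */
            struct page *page;         /* output variable, set each pass */

            for_each_sgt_page(page, sgt_iter, pages)
                    set_page_dirty(page);
    }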
-
By Chris Wilson
userptr only uses drm_device directly in a single interface, where it was meant to use drm_i915_private (everywhere else we have to derive it from the drm_i915_gem_object and so require going from drm_device). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1463671036-3235-1-git-send-email-chris@chris-wilson.co.uk
-
- 18 Apr 2016, 1 commit
-
-
By Chris Wilson
Holding a reference to the containing task_struct is not sufficient to prevent the mm_struct from being reaped under memory pressure. If this happens whilst we are calling get_user_pages(), explosions erupt - sometimes an immediate GPF, sometimes page flag corruption. To prevent the target mm from being reaped as we are reading from it, acquire a reference before we begin. Testcase: igt/gem_shrink/*userptr Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-2-git-send-email-chris@chris-wilson.co.uk (cherry picked from commit 40313f0c) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
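A hedged sketch of the fix, assuming the ~v4.6 APIs (mm_users was still a bare atomic_t then; later kernels spell the pin mmget_not_zero(), and the gup signatures have since changed):

    #include <linux/mm.h>
    #include <linux/sched.h>

    static long gup_pinned(struct task_struct *tsk, struct mm_struct *mm,
                           unsigned long start, unsigned long npages,
                           struct page **pages)
    {
            long ret = -EFAULT;

            /* a task_struct reference does not pin the mm; take our own
             * mm_users reference so the mm cannot be reaped mid-walk */
            if (!atomic_inc_not_zero(&mm->mm_users))
                    return ret;

            down_read(&mm->mmap_sem);
            ret = get_user_pages_remote(tsk, mm, start, npages,
                                        1 /* write */, 0 /* force */,
                                        pages, NULL);
            up_read(&mm->mmap_sem);

            mmput(mm);
            return ret;
    }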
-
- 14 Apr 2016, 2 commits
-
-
By Chris Wilson
Reporting -EIO from i915_wait_request() has proven very troublesome over the years, with numerous hard-to-reproduce bugs cropping up in the corner case where a reset occurs and the code wasn't expecting such an error. If we reset the GPU, or have detected a hang and wish to reset the GPU, the request is forcibly completed and the wait broken. Currently, we report either -EAGAIN or -EIO in order for the caller to retreat and restart the wait (if appropriate) after dropping and then reacquiring the struct_mutex (essential to allow the GPU reset to proceed). However, if we take the view that the request is complete (no further work will be done on it by the GPU because it is dead and soon to be reset), then we can proceed with the task at hand and then drop the struct_mutex, allowing the reset to occur. This transfers the burden of checking whether it is safe to proceed to the caller, and in all but one instance it is safe - completely eliminating the source of all spurious -EIO. Of note, we only have two API entry points where we expect that userspace can observe an EIO. The first is when submitting an execbuf: if the GPU is terminally wedged, then the operation cannot succeed and an -EIO is reported. The second is that existing userspace uses the throttle ioctl to detect an already-wedged GPU before starting to use HW acceleration (or to confirm that the GPU is wedged after an error condition), so if the GPU is wedged when the user calls throttle, also report -EIO. v2: Split more carefully the change to i915_wait_request() and assorted ABI from the reset handling. v3: Add a couple of WARN_ON(EIO) to the interruptible modesetting code so that we don't start to leak EIO there in future (and break our hang-resistant modesetting). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-9-git-send-email-chris@chris-wilson.co.uk Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-1-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
As the request is only valid during the same global reset epoch, we can record the current reset_counter when constructing the request and reuse it when waiting upon that request in future. This removes a very hairy atomic check, serialised by the struct_mutex, at the time of waiting, and allows us to transfer those waits to a central dispatcher for all waiters and all requests. PS: With per-engine resets, we obviously cannot assume a global reset epoch for the requests - a per-engine epoch makes the most sense. The challenge then is how to handle checking in the waiter for when to break the wait, as the fine-grained reset may also want to requeue the request (i.e. the assumption that just because the epoch changes the request is completed may be broken - or we just avoid breaking that assumption with the fine-grained resets). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-7-git-send-email-chris@chris-wilson.co.uk
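A minimal sketch of the epoch check (illustrative names): the counter is sampled once at request construction, and the waiter compares it without taking any lock.

    #include <linux/compiler.h>

    struct rq_epoch_sketch {
            unsigned int reset_counter; /* sampled when the request is built */
    };

    /* if the global epoch moved on, a reset fired after this request
     * was constructed and the wait must be abandoned */
    static int wait_precheck(const struct rq_epoch_sketch *rq,
                             const unsigned int *global_epoch)
    {
            if (READ_ONCE(*global_epoch) != rq->reset_counter)
                    return -EAGAIN;
            return 0;
    }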
-
- 12 Apr 2016, 4 commits
-
-
By Chris Wilson
Since we only ever use the drm_i915_private from the stored i915_mm_struct->dev, save some electrons by storing the right backpointer. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-3-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
Holding a reference to the containing task_struct is not sufficient to prevent the mm_struct from being reaped under memory pressure. If this happens whilst we are calling get_user_pages(), explosions erupt - sometimes an immediate GPF, sometimes page flag corruption. To prevent the target mm from being reaped as we are reading from it, acquire a reference before we begin. Testcase: igt/gem_shrink/*userptr Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-2-git-send-email-chris@chris-wilson.co.uk
-
By Chris Wilson
In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns), we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it, otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock, and then when we reacquire the lock, we double-check that everything is ready for removing the invalidated pages. Finally, to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects). v2: Apply a cup of tea to the changelog. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699 Testcase: igt/gem_userptr_blits/sync-unmap-cycles Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
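A sketch of the private-workqueue arrangement (names illustrative): every unpin task is queued on the per-mm workqueue, so the invalidate path can wait for all of them with a single flush.

    #include <linux/workqueue.h>

    struct mm_wq_sketch {
            struct workqueue_struct *wq; /* private, one per mm */
    };

    static int mm_wq_init(struct mm_wq_sketch *mm)
    {
            /* WQ_UNBOUND: the unpin work has no CPU-affinity needs */
            mm->wq = alloc_workqueue("userptr-release", WQ_UNBOUND, 0);
            return mm->wq ? 0 : -ENOMEM;
    }

    static void invalidate_wait(struct mm_wq_sketch *mm)
    {
            /* waits for every queued unpin worker, in parallel */
            flush_workqueue(mm->wq);
    }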
-
By Chris Wilson
I have instances where I want to use drm_malloc_ab() but with a custom gfp mask. And with those, where I want a temporary allocation, I want to try a high-order kmalloc() before using a vmalloc(). So refactor my usage into drm_malloc_gfp(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: dri-devel@lists.freedesktop.org Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Acked-by: Dave Airlie <airlied@redhat.com> Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-6-git-send-email-chris@chris-wilson.co.uk
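A hedged sketch of the allocation strategy (assuming the pre-v5.8 three-argument __vmalloc()): attempt the kmalloc with NOWARN/NORETRY so a high-order failure is cheap, then fall back.

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *malloc_gfp_sketch(size_t nmemb, size_t size, gfp_t gfp)
    {
            void *ptr;

            if (size != 0 && nmemb > SIZE_MAX / size)
                    return NULL;    /* overflow guard */

            if (nmemb * size <= PAGE_SIZE)
                    return kmalloc(nmemb * size, gfp);

            /* high-order attempt: fail fast and quietly */
            ptr = kmalloc(nmemb * size, gfp | __GFP_NOWARN | __GFP_NORETRY);
            if (ptr)
                    return ptr;

            return __vmalloc(nmemb * size, gfp, PAGE_KERNEL);
    }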
-
- 05 Apr 2016, 1 commit
-
-
By Kirill A. Shutemov
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized, and it is unlikely it ever will. We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE, and it is a constant source of confusion on whether PAGE_CACHE_* or PAGE_* constants should be used in a particular case, especially on the border between fs and mm. Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable. Let's stop pretending that pages in the page cache are special. They are not. The changes are pretty straightforward:
- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();
This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later. There are a few places in the code that coccinelle didn't reach; I'll fix them manually in a separate patch. Comments and documentation will also be addressed in a separate patch.

virtual patch

@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT

@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE

@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK

@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)

@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)

@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
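Applied to an illustrative call site, the semantic patch yields:

    /* before */
    index = pos >> PAGE_CACHE_SHIFT;
    page_cache_release(page);

    /* after */
    index = pos >> PAGE_SHIFT;
    put_page(page);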
-
- 02 Mar 2016, 1 commit
-
-
By Tvrtko Ursulin
commit e5756c10
Author: Imre Deak <imre.deak@intel.com>
Date: Fri Aug 14 18:43:30 2015 +0300

    drm/i915/bxt: don't allow cached GEM mappings on A stepping

added an exception disallowing snooping for Broxton A stepping hardware, but userptr was still enabling it regardless. Move the check into HAS_SNOOP now that it is used from multiple call sites, and use it. v2: Userptr cannot be supported when it cannot be coherent, and generalize the code better. (Chris Wilson) v3: Make has_snoop true only when !has_llc. (Chris Wilson) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/1456920631-34302-1-git-send-email-tvrtko.ursulin@linux.intel.com
-
- 26 Feb 2016, 1 commit
-
-
By Chris Wilson
Elsewhere we have adopted the convention of using '_link' to denote elements in the list (and '_list' for the actual list_head itself), and that the name should indicate which list the link belongs to (and preferably not just where the link is being stored). s/vma_link/obj_link/ (we iterate over obj->vma_list) s/mm_list/vm_link/ (we iterate over vm->[in]active_list) Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 16 Feb 2016, 1 commit
-
-
By Dave Hansen
For protection keys, we need to understand whether protections should be enforced in software or not. In general, we enforce protections when working on our own task, but not when on others'. We call these "current" and "remote" operations. This patch introduces a new get_user_pages() variant, get_user_pages_remote(), which is a replacement for when get_user_pages() is called on a non-current tsk/mm. We also introduce a new gup flag, FOLL_REMOTE, which can be used for the "__" gup variants to get this new behavior. The uprobes is_trap_at_addr() location holds mmap_sem and calls get_user_pages(current->mm) on an instruction address. This makes it a pretty unique gup caller. Being an instruction access and also really originating from the kernel (vs. the app), I opted to consider this a 'remote' access where protection keys will not be enforced. Without protection keys, this patch should not change any behavior. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: jack@suse.cz Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/20160212210154.3F0E51EA@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
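A sketch of choosing between the two variants, using the v4.6-era signatures this series introduced (treat them as an assumption; both have changed again since):

    #include <linux/mm.h>
    #include <linux/sched.h>

    static long gup_either(struct task_struct *tsk, struct mm_struct *mm,
                           unsigned long start, unsigned long npages,
                           struct page **pages)
    {
            if (mm == current->mm)  /* "current" op: keys enforced */
                    return get_user_pages(start, npages, 1, 0, pages, NULL);

            /* "remote" op: FOLL_REMOTE implied, keys not enforced */
            return get_user_pages_remote(tsk, mm, start, npages, 1, 0,
                                         pages, NULL);
    }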
-
- 08 Feb 2016, 1 commit
-
-
By Chris Wilson
commit 033908ae
Author: Dave Gordon <david.s.gordon@intel.com>
Date: Thu Dec 10 18:51:23 2015 +0000

    drm/i915: mark GEM object pages dirty when mapped & written by the CPU

introduced a check into i915_gem_object_get_dirty_pages() that returned a NULL pointer when called with a bad object, one that was not backed by shmemfs. This WARN was too strict, as we can work on all struct-page-backed objects, and it resulted in a WARN + GPF for existing userspace. In order to differentiate the various types of objects, add a new flags field to the i915_gem_object_ops struct to describe their capabilities, with the first flag being whether the object has struct pages. v2: Drop the silly const before an integer in the structure declaration. Testcase: igt/gem_userptr_blits/relocations Reported-and-tested-by: Kristian Høgsberg Kristensen <krh@bitplanet.net> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Cc: Kristian Høgsberg Kristensen <krh@bitplanet.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Kristian Høgsberg Kristensen <krh@bitplanet.net> Tested-by: Michal Winiarski <michal.winiarski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Fixes: 033908ae ("drm/i915: mark GEM object pages dirty when mapped & written by the CPU") Link: http://patchwork.freedesktop.org/patch/msgid/1453487551-16799-1-git-send-email-chris@chris-wilson.co.uk (cherry picked from commit de472664) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
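A hedged sketch of the capability-flags shape (illustrative names; the real struct lives in the i915 headers):

    struct gem_object_ops_sketch {
            unsigned int flags;
    #define OBJECT_HAS_STRUCT_PAGE 0x1  /* backed by struct pages */

            int (*get_pages)(void *obj);
            void (*put_pages)(void *obj);
    };

    /* the dirty-pages path can now test the capability instead of
     * WARNing on any non-shmemfs object */
    static int has_struct_page(const struct gem_object_ops_sketch *ops)
    {
            return ops->flags & OBJECT_HAS_STRUCT_PAGE;
    }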
-
- 04 Feb 2016, 1 commit
-
-
By Chris Wilson
commit 033908ae
Author: Dave Gordon <david.s.gordon@intel.com>
Date: Thu Dec 10 18:51:23 2015 +0000

    drm/i915: mark GEM object pages dirty when mapped & written by the CPU

introduced a check into i915_gem_object_get_dirty_pages() that returned a NULL pointer when called with a bad object, one that was not backed by shmemfs. This WARN was too strict, as we can work on all struct-page-backed objects, and it resulted in a WARN + GPF for existing userspace. In order to differentiate the various types of objects, add a new flags field to the i915_gem_object_ops struct to describe their capabilities, with the first flag being whether the object has struct pages. v2: Drop the silly const before an integer in the structure declaration. Testcase: igt/gem_userptr_blits/relocations Reported-and-tested-by: Kristian Høgsberg Kristensen <krh@bitplanet.net> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Dave Gordon <david.s.gordon@intel.com> Cc: Kristian Høgsberg Kristensen <krh@bitplanet.net> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Dave Gordon <david.s.gordon@intel.com> Reviewed-by: Kristian Høgsberg Kristensen <krh@bitplanet.net> Tested-by: Michal Winiarski <michal.winiarski@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1453487551-16799-1-git-send-email-chris@chris-wilson.co.uk
-
- 26 Jan 2016, 1 commit
-
-
By Chris Wilson
The generic interval tree we use to speed up range invalidation is an augmented rbtree that can report all overlapping intervals for a given range. Therefore we do not need to degrade to a linear list if we find overlapping objects. Oops. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1453397563-2848-1-git-send-email-chris@chris-wilson.co.uk Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
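A sketch of the overlap walk (using the rb_root-based interval_tree API of kernels from this era):

    #include <linux/interval_tree.h>

    /* the augmented rbtree reports every node overlapping [start, last]
     * directly; no linear-list fallback is required */
    static void for_each_overlap(struct rb_root *root,
                                 unsigned long start, unsigned long last)
    {
            struct interval_tree_node *it;

            for (it = interval_tree_iter_first(root, start, last);
                 it;
                 it = interval_tree_iter_next(it, start, last)) {
                    /* invalidate the object owning [it->start, it->last] */
            }
    }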
-
- 13 Oct 2015, 1 commit
-
-
By Chris Wilson
Pinning a userptr onto the hardware raises interesting questions about the lifetime of such a surface, as the framebuffer extends that life beyond the client's address space. That is, the hardware will need to keep scanning out from the backing storage even after the client wants to remap its address space. As the hardware pins the backing storage, the userptr becomes invalid, and this raises a WARN when the client tries to unmap its address space. The situation can be even more complicated when the buffer is passed between processes, between a client and a display server, where the lifetime and hardware access are even more confusing. Deny it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 06 Oct 2015, 3 commits
-
-
By Chris Wilson
Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages, all of which stem from avoiding a sleeping function inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is flushed before we retry after reacquiring the struct_mutex. The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTE's pages remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping a userptr active on the GPU, where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined. v2: Only queue (and importantly only take one refcnt) the worker once. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Chris Wilson
Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT-mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found that by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock. To counteract the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus, if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock. The patch makes one eye-catching change: the removal of serial = 0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial = 0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed), and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock; if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache. To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr(). v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages. v4: Take the active-flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker. v5: Rearrange the error path so that it is not so convoluted v6: Set pinned to 0 when negative before calling release_pages() Reported-by: Michał Winiarski <michal.winiarski@intel.com> Testcase: igt/gem_userptr_blits/map-fixed* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michał Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
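A hedged sketch of the ordering that makes the lockless check safe (all names illustrative): the flag is set only once obj->pages is established, and the notifier reads it under its own spinlock before deciding whether struct_mutex is needed at all.

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    struct userptr_sketch {
            spinlock_t lock;           /* the mmu-notifier spinlock */
            bool active;               /* set only after pages exist */
            struct mutex *struct_mutex;
    };

    static void invalidate_sketch(struct userptr_sketch *s)
    {
            bool active;

            spin_lock(&s->lock);
            active = s->active;
            spin_unlock(&s->lock);

            /* nothing mapped: never touch struct_mutex, so the
             * mmap/userptr aliasing deadlock cannot occur */
            if (!active)
                    return;

            mutex_lock(s->struct_mutex);
            /* ... revoke the pages ... */
            mutex_unlock(s->struct_mutex);
    }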
-
By Chris Wilson
The userptr worker allows for a slight race condition in which two or more threads may call get_user_pages for the same object. When we have the array of pages, we then serialise the update of the object. However, the worker should overwrite the obj->userptr.work pointer only if it is the active one. Currently we clear it for a secondary worker, with the effect that we may rarely force a second lookup. v2: Rebase and rename a variable to avoid 80cols v3: Mention v2 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 Aug 2015, 1 commit
-
-
By Michel Thierry
The GTT was only 32b, so its max value is 4GB. In order to allow objects bigger than 4GB in 48b PPGTT, i915_gem_userptr_ioctl could check against the max 48b range (1ULL << 48). But since the check no longer applies, just kill the limit. v2: Use the default ctx to infer the ppgtt max size (Akash). v3: Just kill the limit, as it was only there for early detection of an error when used for execbuffer (Chris). Cc: Akash Goel <akash.goel@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Michel Thierry <michel.thierry@intel.com> Reviewed-by: Akash Goel <akash.goel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 Jul 2015, 1 commit
-
-
By Imre Deak
We have 3 types of DMA mappings for GEM objects:
1. physically contiguous, for stolen objects and for objects needing contiguous memory;
2. DMA-buf mappings imported via a DMA-buf attach operation;
3. SG DMA mappings for shmem-backed and userptr objects.
For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages, so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we'll create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma. Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in the 3. case even if the object already had one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example, HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping. Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks. These callbacks already check for and do an early return in case of any nested calls. This way objects of the 3. case also become more like the other object types. I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue. The fix is based on a patch from Chris. v2: - move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris) v3: - also fix the get_page cache logic on the userptr async path (Chris) Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
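The rule the fix establishes, sketched with generic DMA API calls: the mapping is created exactly once when the backing pages are acquired, and destroyed when they are released, so a second vma bind can never double-map.

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int get_pages_sketch(struct device *dev, struct sg_table *st)
    {
            /* map once, when the pages are first acquired */
            if (!dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL))
                    return -ENOSPC;
            return 0;
    }

    static void put_pages_sketch(struct device *dev, struct sg_table *st)
    {
            /* unmap when the pages are released, mirroring get_pages */
            dma_unmap_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL);
    }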
-
- 20 May 2015, 1 commit
-
-
By Chris Wilson
Mika encountered one pathological scenario under X where acquiring all the mm locks (required to insert an mmu notifier) was very slow, so slow that by the time we tried to lock the struct_mutex with the usual call to i915_mutex_lock_interruptible(), X's signal timer had fired, causing us to restart the ioctl (and so it looped indefinitely). While I suspect this is the result of another bug (something leaking an mm perhaps?), we can forgo the error checking and interruptible nature of the lock here, so we only have to pay the expense once and get on with it. This does expose the userptr creation routine to a driver livelock, though, by not being interruptible. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@intel.com> [danvet: Init ret to avoid issues reported by PRTS.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 May 2015, 1 commit
-
-
By Maarten Lankhorst
This results in a warning when building out of tree: "cc1: warning: include/drm: No such file or directory [enabled by default]". Most code already uses #include <drm/foo.h> correctly, so fix the instances that don't. Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
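An illustrative before/after (the header name is an assumption; the point is that the first form only resolves with an extra -Iinclude/drm flag that out-of-tree builds don't get):

    /* before: relies on -Iinclude/drm on the compiler command line */
    #include "drmP.h"

    /* after: resolves via the normal kernel include path */
    #include <drm/drmP.h>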
-
- 05 Feb 2015, 1 commit
-
-
By Michał Winiarski
It's possible for the invalidate_range_start mmu notifier callback to race against userptr object release. If the gem object was released prior to obtaining the spinlock in invalidate_range_start, we hit a null pointer dereference. Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close-overlap Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org [Jani: added code comment suggested by Chris] Signed-off-by: Jani Nikula <jani.nikula@intel.com>
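A sketch of closing the race (illustrative names): the release path clears the pointer under the same spinlock, so a NULL test after acquiring it is decisive. A real implementation would also take a reference before dropping the lock.

    #include <linux/spinlock.h>

    struct mo_sketch {
            spinlock_t lock;
            void *obj;      /* cleared under 'lock' on object release */
    };

    static void *grab_obj(struct mo_sketch *mo)
    {
            void *obj;

            spin_lock(&mo->lock);
            /* release may have run between lookup and lock acquisition */
            obj = mo->obj;
            spin_unlock(&mo->lock);

            return obj;
    }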
-
- 29 Sep 2014, 2 commits
-
-
By Tvrtko Ursulin
sg_alloc_table_from_pages() can build us a table with coalesced ranges, which means we need to iterate over pages and not sg table entries when releasing page references. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: "Barbalho, Rafael" <rafael.barbalho@intel.com> Tested-by: Rafael Barbalho <rafael.barbalho@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: stable@vger.kernel.org [danvet: Remove unused local variable sg.] Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
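The fix, sketched with the generic sg page iterator: drop references page-by-page, since one coalesced sg entry can cover many pages.

    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    static void release_pages_sketch(struct sg_table *st, bool dirty)
    {
            struct sg_page_iter sg_iter;

            for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
                    struct page *page = sg_page_iter_page(&sg_iter);

                    if (dirty)
                            set_page_dirty(page);
                    put_page(page);
            }
    }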
-
By Chris Wilson
If we fail to create our mmu notifier, we report the error back and currently store the error inside the i915_mm_struct. This not only causes subsequent registrations of the same mm to fail (an issue if the first was interrupted by a signal and needed to be restarted) but also causes us to eventually try and free the error pointer.

[   73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
[   73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.420065] PGD 8650c067 PUD 870bb067 PMD 0
[   73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[   73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561
[   73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011
[   73.421405] Workqueue: events __i915_mm_struct_free__worker
[   73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000
[   73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286
[   73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000
[   73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044
[   73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000
[   73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044
[   73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040
[   73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000
[   73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0
[   73.426157] Stack:
[   73.426597]  ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008
[   73.427096]  ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28
[   73.427603]  ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af
[   73.428119] Call Trace:
[   73.428606]  [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80
[   73.429116]  [<ffffffff8106321a>] process_one_work+0x1ba/0x610
[   73.429632]  [<ffffffff810631af>] ? process_one_work+0x14f/0x610
[   73.430153]  [<ffffffff810636db>] worker_thread+0x6b/0x4a0
[   73.430671]  [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10
[   73.431501]  [<ffffffff81063670>] ? process_one_work+0x610/0x610
[   73.432030]  [<ffffffff8106a206>] kthread+0xf6/0x110
[   73.432561]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[   73.433100]  [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0
[   73.433644]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[   73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65
[   73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[   73.437017] RSP <ffff88008816bd50>
[   73.437704] CR2: 000000000000004c

Fixes regression from
commit ad46cb53
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Aug 7 14:20:40 2014 +0100

    drm/i915: Prevent recursive deadlock on releasing a busy userptr

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207 Testcase: igt/gem_render_copy_redux Testcase: igt/gem_userptr_blits/create-destroy-sync Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Cc: stable@vger.kernel.org Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
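A hedged sketch of the bug class being fixed (illustrative names): caching the ERR_PTR made every later lookup return the stale error, and teardown eventually tried to unregister it as though it were a real notifier.

    #include <linux/err.h>

    struct mn_sketch;
    struct mm_entry_sketch {
            struct mn_sketch *mn;
    };

    struct mn_sketch *create_mn_sketch(void); /* may return an ERR_PTR */

    static int attach_mn(struct mm_entry_sketch *e)
    {
            struct mn_sketch *mn = create_mn_sketch();

            if (IS_ERR(mn))
                    return PTR_ERR(mn); /* propagate, never store it */

            e->mn = mn;                 /* stored pointer is always valid */
            return 0;
    }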
-
- 08 Sep 2014, 1 commit
-
-
By Chris Wilson
During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap(), which closes the vma, which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput, as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling with mm_count/mmdrop() for the shared i915_mm_struct.

INFO: task test_surfaces:1632 blocked for more than 120 seconds.
Tainted: GF O 3.14.5+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
test_surfaces D 0000000000000000 0 1632 1590 0x00000082
 ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010
 0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010
 ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824
Call Trace:
 [<ffffffff81582499>] schedule+0x29/0x70
 [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10
 [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220
 [<ffffffff81583c53>] mutex_lock+0x23/0x40
 [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]
 [<ffffffff8115a483>] remove_vma+0x33/0x70
 [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170
 [<ffffffff8104d6eb>] mmput+0x6b/0x100
 [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]
 [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]
 [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]
 [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]
 [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]
 [<ffffffff8127ffeb>] idr_for_each+0xab/0x100
 [<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]
 [<ffffffff81583c46>] ? mutex_lock+0x16/0x40
 [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]
 [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]
 [<ffffffff8118d482>] __fput+0xb2/0x260
 [<ffffffff8118d6de>] ____fput+0xe/0x10
 [<ffffffff8106f27f>] task_work_run+0x8f/0xf0
 [<ffffffff81052228>] do_exit+0x1a8/0x480
 [<ffffffff81052551>] do_group_exit+0x51/0xc0
 [<ffffffff810525d7>] SyS_exit_group+0x17/0x20
 [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b

v2: Incorporate feedback from Tvrtko and remove the unnecessary mm referencing when creating the i915_mm_struct, and improve some of the function names and comments. Reported-by: Jacek Danecki <jacek.danecki@intel.com> Test-case: igt/gem_userptr_blits/process-exit* Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com> Cc: Jacek Danecki <jacek.danecki@intel.com> Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing Signed-off-by: Jani Nikula <jani.nikula@intel.com>
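A sketch of the deferred release (illustrative names; mmdrop() pairs with mm_count, so holding it does not keep the whole address space alive the way mm_users does):

    #include <linux/sched.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct mm_release_sketch {
            struct work_struct work;
            struct mm_struct *mm;
    };

    static void mm_release_worker(struct work_struct *work)
    {
            struct mm_release_sketch *w =
                    container_of(work, struct mm_release_sketch, work);

            mmdrop(w->mm);  /* final put runs outside struct_mutex */
            kfree(w);
    }

    static void release_mm_async(struct mm_release_sketch *w)
    {
            INIT_WORK(&w->work, mm_release_worker);
            schedule_work(&w->work);
    }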
-