- 08 Feb, 2016 1 commit
-
-
Submitted by Chris Wilson
commit 033908ae
Author: Dave Gordon <david.s.gordon@intel.com>
Date:   Thu Dec 10 18:51:23 2015 +0000

    drm/i915: mark GEM object pages dirty when mapped & written by the CPU

introduced a check into i915_gem_object_get_dirty_pages() that returned a NULL pointer when called with a bad object, one that was not backed by shmemfs. This WARN was too strict, as we can work on all struct-page-backed objects, and it resulted in a WARN + GPF for existing userspace. In order to differentiate the various types of objects, add a new flags field to the i915_gem_object_ops struct to describe their capabilities, with the first flag being whether the object has struct pages.

v2: Drop silly const before an integer in the structure declaration.

Testcase: igt/gem_userptr_blits/relocations
Reported-and-tested-by: Kristian Høgsberg Kristensen <krh@bitplanet.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Gordon <david.s.gordon@intel.com>
Cc: Kristian Høgsberg Kristensen <krh@bitplanet.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Kristian Høgsberg Kristensen <krh@bitplanet.net>
Tested-by: Michal Winiarski <michal.winiarski@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Fixes: 033908ae ("drm/i915: mark GEM object pages dirty when mapped & written by the CPU")
Link: http://patchwork.freedesktop.org/patch/msgid/1453487551-16799-1-git-send-email-chris@chris-wilson.co.uk
(cherry picked from commit de472664)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
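A minimal sketch of the capability-flags pattern this commit describes. The struct name matches the driver, but the flag value, field layout, and helper below are illustrative rather than copied from the source:

```c
struct drm_i915_gem_object;	/* forward declaration for the sketch */

struct drm_i915_gem_object_ops {
	unsigned int flags;
#define I915_GEM_OBJECT_HAS_STRUCT_PAGE 0x1	/* backed by struct page */

	int (*get_pages)(struct drm_i915_gem_object *obj);
	void (*put_pages)(struct drm_i915_gem_object *obj);
};

/* Callers can then gate struct-page-only paths on the capability
 * instead of WARNing on every non-shmemfs object: */
static inline bool
i915_gem_object_has_struct_page(const struct drm_i915_gem_object_ops *ops)
{
	return ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE;
}
```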
-
- 13 Oct, 2015 1 commit
-
-
Submitted by Chris Wilson
Pinning a userptr onto the hardware raises interesting questions about the lifetime of such a surface, as the framebuffer extends that life beyond the client's address space. That is, the hardware will need to keep scanning out from the backing storage even after the client wants to remap its address space. As the hardware pins the backing storage, the userptr becomes invalid and this raises a WARN when the client tries to unmap its address space. The situation can be even more complicated when the buffer is passed between processes, between a client and display server, where the lifetime and hardware access is even more confusing. Deny it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 06 Oct, 2015 3 commits
-
-
Submitted by Chris Wilson
Whilst discussing possible ways to trigger an invalidate_range on a userptr with an aliased GGTT mmapping (and so cause a struct_mutex deadlock), the conclusion is that we can, and we must, prevent any possible deadlock by avoiding taking the mutex at all during invalidate_range. This has numerous advantages, all of which stem from avoiding a sleeping function inside the unknown context. In particular, it simplifies the invalidate_range because we no longer have to juggle the spinlock/mutex and can just hold the spinlock for the entire walk. To compensate, we have to make get_pages a bit more complicated in order to serialise with a pending cancel_userptr worker. As we hold the struct_mutex, we have no choice but to return EAGAIN and hope that the worker is then flushed before we retry after reacquiring the struct_mutex.

The important caveat is that the invalidate_range itself is no longer synchronous. There exists a small but definite period in time in which the old PTEs' pages remain accessible via the GPU. Note however that the physical pages themselves are not invalidated by the mmu_notifier, just the CPU view of the address space. The impact should be limited to a delay in pages being flushed, rather than a possibility of writing to the wrong pages. The only race condition that this worsens is remapping a userptr active on the GPU, where fresh work may still reference the old pages due to struct_mutex contention. Given that userspace is racing with the GPU, it is fair to say that the results are undefined.

v2: Only queue (and importantly only take one refcnt) the worker once.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
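A hedged sketch of the EAGAIN dance described above; the field and helper names are assumptions for illustration, not the driver's actual identifiers:

```c
/* Sketch only: names beyond the commit text are assumed. */
static int userptr_get_pages(struct drm_i915_gem_object *obj)
{
	/* A cancel_userptr worker may still be in flight; we hold
	 * struct_mutex, so flushing it here would deadlock. Report
	 * -EAGAIN and let the caller drop the mutex and retry once
	 * the worker has run. */
	if (work_pending(&obj->userptr.cancel_work))
		return -EAGAIN;

	/* ... otherwise proceed with the normal gup path ... */
	return 0;
}
```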
-
Submitted by Chris Wilson
Michał Winiarski found a really evil way to trigger a struct_mutex deadlock with userptr. He found that if he allocated a userptr bo and then GTT mmaped another bo, or even itself, at the same address as the userptr using MAP_FIXED, he could then cause a deadlock any time we then had to invalidate the GTT mmappings (so at will). Tvrtko then found that by repeatedly allocating GTT mmappings he could alias with an old userptr mmap and also trigger the deadlock.

To counteract the deadlock, we make the observation that we only need to take the struct_mutex if the object has any pages to revoke, and that before userspace can alias with the userptr address space, it must have invalidated the userptr->pages. Thus if we can check for those pages outside of the struct_mutex, we can avoid the deadlock. To do so we introduce a separate flag for userptr objects that we can inspect from the mmu-notifier underneath its spinlock.

The patch makes one eye-catching change. That is the removal of serial=0 after detecting a to-be-freed object inside the invalidate walker. I felt setting serial=0 was a questionable pessimisation: it denies us the chance to reuse the current iterator for the next loop (before it is freed) and being explicit makes the reader question the validity of the locking (since the object-free race could occur elsewhere). The serialisation of the iterator is through the spinlock; if the object is freed before the next loop then the notifier.serial will be incremented and we start the walk from the beginning as we detect the invalid cache.

To try and tame the error paths and interactions with the userptr->active flag, we have to do a fair amount of rearranging of get_pages_userptr().

v2: Grammar fixes
v3: Reorder set-active so that it is only set when obj->pages is set (and so needs cancellation). Only the order of setting obj->pages and the active-flag is crucial. Calling gup after invalidate-range begin means the userptr sees the new set of backing storage (and so will not need to invalidate its new pages), but we have to be careful not to set the active-flag prior to successfully establishing obj->pages.
v4: Take the active-flag early so we know in the mmu-notifier when we have to cancel a pending gup-worker.
v5: Rearrange the error path so that it is not so convoluted
v6: Set pinned to 0 when negative before calling release_pages()

Reported-by: Michał Winiarski <michal.winiarski@intel.com>
Testcase: igt/gem_userptr_blits/map-fixed*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
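A sketch of the lockless check under the mmu-notifier spinlock; only the locking pattern is taken from the commit text, and the active flag, list layout, and worker names are assumptions:

```c
/* Sketch: only objects that ever pinned pages force the slow path. */
static void invalidate_range_start(struct i915_mmu_notifier *mn,
				   unsigned long start, unsigned long end)
{
	struct i915_mmu_object *mo;

	spin_lock(&mn->lock);
	list_for_each_entry(mo, &mn->objects, link) {
		if (mo->it.last < start || mo->it.start >= end)
			continue;	/* outside the invalidated range */
		if (!mo->active)
			continue;	/* no pages to revoke: skip the mutex */
		/* Revocation needs struct_mutex, so push it to a worker
		 * rather than risk the deadlock described above. */
		queue_work(system_wq, &mo->cancel_work);
	}
	spin_unlock(&mn->lock);
}
```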
-
Submitted by Chris Wilson
The userptr worker allows for a slight race condition whereupon two or more threads may call get_user_pages for the same object. When we have the array of pages, we then serialise the update of the object. However, the worker should overwrite the obj->userptr.work pointer only if it is the active one. Currently we clear it for a secondary worker, with the effect that we may rarely force a second lookup.

v2: Rebase and rename a variable to avoid 80cols
v3: Mention v2

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
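A minimal sketch of the "am I still the active worker?" check; the work struct and field names are illustrative, with only obj->userptr.work taken from the commit text:

```c
struct get_pages_work {		/* assumed shape for the sketch */
	struct work_struct work;
	struct drm_i915_gem_object *obj;
};

static void __get_pages_worker(struct work_struct *_work)
{
	struct get_pages_work *work =
		container_of(_work, struct get_pages_work, work);
	struct drm_i915_gem_object *obj = work->obj;

	/* ... get_user_pages() and page array setup ... */

	mutex_lock(&obj->base.dev->struct_mutex);
	if (obj->userptr.work == &work->work)	/* still the active worker? */
		obj->userptr.work = NULL;	/* yes: publish completion */
	/* a superseded worker must leave the pointer for its successor */
	mutex_unlock(&obj->base.dev->struct_mutex);
}
```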
-
- 15 Aug, 2015 1 commit
-
-
Submitted by Michel Thierry
The GTT was only 32b, so its max value is 4GB. In order to allow objects bigger than 4GB in 48b PPGTT, i915_gem_userptr_ioctl could check against the max 48b range (1ULL << 48). But since the check no longer applies, just kill the limit.

v2: Use the default ctx to infer the ppgtt max size (Akash).
v3: Just kill the limit, it was only there for early detection of an error when used for execbuffer (Chris).

Cc: Akash Goel <akash.goel@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 14 Jul, 2015 1 commit
-
-
Submitted by Imre Deak
We have 3 types of DMA mappings for GEM objects:

1. physically contiguous mappings, for stolen memory and for objects needing contiguous memory
2. DMA-buf mappings imported via a DMA-buf attach operation
3. SG DMA mappings for shmem-backed and userptr objects

For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the corresponding backing pages, so in practice we create/release the mapping in the object's get_pages/put_pages callback. For 3. the lifetime of the mapping matches that of any existing GPU binding of the object, so we create the mapping when the object is bound to the first vma and release the mapping when the object is unbound from its last vma.

Since the object can be bound to multiple vmas, we can end up creating a new DMA mapping in case 3. even if the object already has one. This is not allowed by the DMA API and can lead to leaked mapping data and IOMMU memory space starvation in certain cases. For example, HW IOMMU drivers (intel_iommu) allocate a new range from their memory space whenever a mapping is created, silently overriding a pre-existing mapping.

Fix this by moving the creation/removal of DMA mappings to the object's get_pages/put_pages callbacks, as sketched below. These callbacks already check for and do an early return in case of any nested calls. This way objects of case 3. also become more like the other object types.

I noticed this issue by enabling DMA debugging, which got disabled after a while due to its internal mapping tables getting full. It also reported errors in connection to random other drivers that did a DMA mapping for an address that was previously mapped by i915 but was never released. Besides these diagnostic messages and the memory space starvation problem for IOMMUs, I'm not aware of this causing a real issue.

The fix is based on a patch from Chris.

v2:
- move the DMA mapping create/remove calls to the get_pages/put_pages callbacks instead of adding new callbacks for these (Chris)
v3:
- also fix the get_page cache logic on the userptr async path (Chris)

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
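A minimal sketch of the pattern, assuming illustrative helper names (obj_alloc_backing_pages and friends); dma_map_sg/dma_unmap_sg are the real DMA API calls:

```c
/* Sketch: map exactly once, in the same callback that creates the
 * backing pages, so a second vma bind can never re-map the sg list. */
static int sketch_get_pages(struct drm_i915_gem_object *obj)
{
	int ret;

	ret = obj_alloc_backing_pages(obj);	/* fills obj->pages (sg_table) */
	if (ret)
		return ret;

	if (!dma_map_sg(obj->base.dev->dev, obj->pages->sgl,
			obj->pages->nents, DMA_BIDIRECTIONAL)) {
		obj_free_backing_pages(obj);
		return -ENOSPC;
	}
	return 0;
}

static void sketch_put_pages(struct drm_i915_gem_object *obj)
{
	/* Unmap in the symmetric callback, once per mapping. */
	dma_unmap_sg(obj->base.dev->dev, obj->pages->sgl,
		     obj->pages->nents, DMA_BIDIRECTIONAL);
	obj_free_backing_pages(obj);
}
```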
-
- 20 May, 2015 1 commit
-
-
Submitted by Chris Wilson
Mika encountered one pathological scenario under X where acquiring all the mm locks (required to insert a mmu notifier) was very slow, so slow that by the time we tried to lock the struct_mutex with the usual call to i915_mutex_lock_interruptible(), X's signal timer had fired, causing us to restart the ioctl (and so we looped indefinitely). While I suspect this is the result of another bug (something leaking mm perhaps?) we can forgo the error checking and interruptible nature of the lock here, so we only have to pay the expense once and get on with it. This does expose the userptr creation routine to a driver livelock, though, by not being interruptible.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Init ret to avoid issues reported by PRTS.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
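A sketch of the change in locking discipline, with the surrounding function invented for illustration; i915_mutex_lock_interruptible() is the i915 helper named in the message:

```c
/* Before (sketch): a pending signal aborts the lock attempt and the
 * ioctl restarts, re-paying the expensive mm-lock acquisition:
 *
 *	ret = i915_mutex_lock_interruptible(dev);
 *	if (ret)
 *		return ret;	// -EINTR: restart, livelock under X
 *
 * After: take the lock uninterruptibly and pay the cost exactly once.
 */
static void register_notifier_once(struct drm_device *dev)
{
	mutex_lock(&dev->struct_mutex);
	/* ... insert the mmu notifier ... */
	mutex_unlock(&dev->struct_mutex);
}
```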
-
- 13 May, 2015 1 commit
-
-
Submitted by Maarten Lankhorst
This results in a warning when building out of tree: "cc1: warning: include/drm: No such file or directory [enabled by default]" Most code already uses #include <drm/foo.h> correctly, so fix the instances that don't.

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
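The shape of the fix, with drmP.h as an assumed example header:

```c
/* Before: only resolves with an extra -Iinclude/drm in the compiler
 * flags, which out-of-tree builds do not pass. */
#include "drm/drmP.h"

/* After: resolved through the normal kernel include path. */
#include <drm/drmP.h>
```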
-
- 05 Feb, 2015 1 commit
-
-
Submitted by Michał Winiarski
It's possible for the invalidate_range_start mmu notifier callback to race against userptr object release. If the gem object was released prior to obtaining the spinlock in invalidate_range_start, we hit a NULL pointer dereference.

Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close
Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close-overlap
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[Jani: added code comment suggested by Chris]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
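A sketch of the kind of re-validation such a race requires; the lookup helper and field names are illustrative, with only the "check under the spinlock" pattern taken from the commit text:

```c
/* Sketch: the object can be released between the notifier firing and
 * the spinlock being taken, so re-check the pointer under the lock. */
static void invalidate_range(struct i915_mmu_notifier *mn,
			     unsigned long start, unsigned long end)
{
	struct i915_mmu_object *mo;

	spin_lock(&mn->lock);
	mo = mn_lookup_overlapping(mn, start, end);	/* assumed helper */
	/* mo->obj is cleared under mn->lock during object release. */
	if (mo == NULL || mo->obj == NULL) {
		spin_unlock(&mn->lock);
		return;
	}
	cancel_userptr_pages(mo->obj);			/* assumed helper */
	spin_unlock(&mn->lock);
}
```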
-
- 29 Sep, 2014 2 commits
-
-
Submitted by Tvrtko Ursulin
sg_alloc_table_from_pages() can build us a table with coalesced ranges, which means we need to iterate over pages and not sg table entries when releasing page references.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Barbalho, Rafael" <rafael.barbalho@intel.com>
Tested-by: Rafael Barbalho <rafael.barbalho@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[danvet: Remove unused local variable sg.]
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
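With coalescing, st->nents can be smaller than the page count, so dropping one reference per sg entry leaks the difference. A sketch of the per-page walk; for_each_sg_page() and sg_page_iter_page() are the real scatterlist API:

```c
static void release_backing_pages(struct sg_table *st)
{
	struct sg_page_iter sg_iter;

	/* Walk the constituent pages, not the (possibly coalesced)
	 * scatterlist entries. */
	for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
		struct page *page = sg_page_iter_page(&sg_iter);

		set_page_dirty(page);
		put_page(page);	/* one reference per page, not per entry */
	}
	sg_free_table(st);
}
```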
-
Submitted by Chris Wilson
If we fail to create our mmu notification, we report the error back and currently store the error inside the i915_mm_struct. This not only causes subsequent registrations of the same mm to fail (an issue if the first was interrupted by a signal and needed to be restarted) but also causes us to eventually try and free the error pointer.

[ 73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
[ 73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.420065] PGD 8650c067 PUD 870bb067 PMD 0
[ 73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
[ 73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G W 3.17.0-rc6+ #1561
[ 73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011
[ 73.421405] Workqueue: events __i915_mm_struct_free__worker
[ 73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000
[ 73.422051] RIP: 0010:[<ffffffff8114af33>] [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.422410] RSP: 0018:ffff88008816bd50 EFLAGS: 00010286
[ 73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000
[ 73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044
[ 73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000
[ 73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044
[ 73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040
[ 73.424682] FS: 0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000
[ 73.425099] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0
[ 73.426157] Stack:
[ 73.426597] ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008
[ 73.427096] ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28
[ 73.427603] ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af
[ 73.428119] Call Trace:
[ 73.428606] [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80
[ 73.429116] [<ffffffff8106321a>] process_one_work+0x1ba/0x610
[ 73.429632] [<ffffffff810631af>] ? process_one_work+0x14f/0x610
[ 73.430153] [<ffffffff810636db>] worker_thread+0x6b/0x4a0
[ 73.430671] [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10
[ 73.431501] [<ffffffff81063670>] ? process_one_work+0x610/0x610
[ 73.432030] [<ffffffff8106a206>] kthread+0xf6/0x110
[ 73.432561] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[ 73.433100] [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0
[ 73.433644] [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
[ 73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65
[ 73.435942] RIP [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
[ 73.437017] RSP <ffff88008816bd50>
[ 73.437704] CR2: 000000000000004c

Fixes regression from
commit ad46cb53
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Thu Aug 7 14:20:40 2014 +0100

    drm/i915: Prevent recursive deadlock on releasing a busy userptr

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207
Testcase: igt/gem_render_copy_redux
Testcase: igt/gem_userptr_blits/create-destroy-sync
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
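A sketch of the bug pattern being fixed: an ERR_PTR from a failed registration must not be cached as if it were a valid notifier, or a later release path will hand it to mmu_notifier_unregister(). Structure and helper names are assumptions; IS_ERR() is the real kernel helper:

```c
static struct i915_mmu_notifier *
lookup_or_create_notifier(struct i915_mm_struct *mm)
{
	if (mm->mn == NULL) {
		struct i915_mmu_notifier *mn =
			i915_mmu_notifier_create(mm->mm);	/* assumed */

		/* Cache only a real notifier; propagate the error
		 * without storing it, so a restarted call can retry
		 * and the free path never sees an ERR_PTR. */
		if (IS_ERR(mn))
			return mn;
		mm->mn = mn;
	}
	return mm->mn;
}
```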
-
- 08 Sep, 2014 1 commit
-
-
Submitted by Chris Wilson
During release of the GEM object we hold the struct_mutex. As the object may be holding onto the last reference for the task->mm, calling mmput() may trigger exit_mmap() which closes the vma, which will call drm_gem_vm_close() and attempt to reacquire the struct_mutex. In order to avoid that recursion, we have to defer the mmput() until after we drop the struct_mutex, i.e. we need to schedule a worker to do the clean up. A further issue spotted by Tvrtko was caused when we took a GTT mmapping of a userptr buffer object. In that case, we would never call mmput as the object would be cyclically referenced by the GTT mmapping and not freed upon process exit - keeping the entire process mm alive after the process task was reaped. The fix employed is to replace the mm_users/mmput() reference handling with mm_count/mmdrop() for the shared i915_mm_struct.

INFO: task test_surfaces:1632 blocked for more than 120 seconds.
Tainted: GF O 3.14.5+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
test_surfaces D 0000000000000000 0 1632 1590 0x00000082
ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010
0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010
ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824
Call Trace:
[<ffffffff81582499>] schedule+0x29/0x70
[<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10
[<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220
[<ffffffff81583c53>] mutex_lock+0x23/0x40
[<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]
[<ffffffff8115a483>] remove_vma+0x33/0x70
[<ffffffff8115a5dc>] exit_mmap+0x11c/0x170
[<ffffffff8104d6eb>] mmput+0x6b/0x100
[<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]
[<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]
[<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]
[<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]
[<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]
[<ffffffff8127ffeb>] idr_for_each+0xab/0x100
[<ffffffffa005cc70>] ? drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]
[<ffffffff81583c46>] ? mutex_lock+0x16/0x40
[<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]
[<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]
[<ffffffff8118d482>] __fput+0xb2/0x260
[<ffffffff8118d6de>] ____fput+0xe/0x10
[<ffffffff8106f27f>] task_work_run+0x8f/0xf0
[<ffffffff81052228>] do_exit+0x1a8/0x480
[<ffffffff81052551>] do_group_exit+0x51/0xc0
[<ffffffff810525d7>] SyS_exit_group+0x17/0x20
[<ffffffff8158e092>] system_call_fastpath+0x16/0x1b

v2: Incorporate feedback from Tvrtko and remove the unnecessary mm referencing when creating the i915_mm_struct and improve some of the function names and comments.

Reported-by: Jacek Danecki <jacek.danecki@intel.com>
Test-case: igt/gem_userptr_blits/process-exit*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Jacek Danecki <jacek.danecki@intel.com>
Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
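A sketch of the deferred-release shape described above; the worker name matches the one visible in the trace, the struct layout is assumed, and mmdrop() is the real mm_count counterpart of mmput():

```c
struct i915_mm_struct {		/* assumed shape for the sketch */
	struct mm_struct *mm;	/* pinned via mm_count, not mm_users */
	struct work_struct work;
};

static void __i915_mm_struct_free__worker(struct work_struct *work)
{
	struct i915_mm_struct *mm =
		container_of(work, struct i915_mm_struct, work);

	/* Safe here: the worker holds no GEM locks, so a recursion
	 * back into drm_gem_vm_close() cannot deadlock. */
	mmdrop(mm->mm);
	kfree(mm);
}

static void i915_mm_struct_free(struct i915_mm_struct *mm)
{
	/* Called under struct_mutex: defer the drop to a worker. */
	INIT_WORK(&mm->work, __i915_mm_struct_free__worker);
	schedule_work(&mm->work);
}
```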
-
- 25 Jul, 2014 1 commit
-
-
Submitted by Chris Wilson
Move the code around in order to acquire and release the spinlock in the same function and in the same block. This keeps static analysers happy and the reader sane.

Suggested-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
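The shape of the refactor, as a generic sketch (function and field names invented):

```c
/* Before (sketch): the lock was taken in one function and released in
 * a callee, so the pairing was invisible at either site.
 *
 * After: acquire and release in the same block. */
static void invalidate_walk(struct i915_mmu_notifier *mn)
{
	spin_lock(&mn->lock);
	/* ... walk the objects ... */
	spin_unlock(&mn->lock);
}
```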
-
- 24 Jul, 2014 1 commit
-
-
Submitted by Chris Wilson
Whilst I strongly advise against doing so for the implicit coherency issues between the multiple buffer objects accessing the same backing store, it nevertheless is a valid use case, akin to mmaping the same file multiple times. The reason why we forbade it earlier was that our use of the interval tree for fast invalidation upon vma changes excluded overlapping objects. So in the case where the user wishes to create such pairs of overlapping objects, we degrade the range invalidation to walking the linear list of objects associated with the mm.

A situation where overlapping objects could arise is the lax implementation of MIT-SHM Pixmaps in the xserver. A second situation is where the user wishes to have different access modes to a region of memory (e.g. access through a read-only userptr buffer and through a normal userptr buffer).

v2: Compile for mmu-notifiers after tweaking
v3: Rename is_linear/has_linear

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: "Li, Victor Y" <victor.y.li@intel.com>
Cc: "Kelley, Sean V" <sean.v.kelley@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Akash Goel <akash.goel@intel.com>
Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
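A sketch of the degraded invalidation path: interval-tree lookups stay the fast path, and once any overlap exists the code falls back to a plain list walk. The has_linear flag is named in the v3 note; everything else here is illustrative:

```c
static void invalidate_range__linear(struct i915_mmu_notifier *mn,
				     unsigned long start, unsigned long end)
{
	struct i915_mmu_object *mo;

	/* O(n) walk, but correct even with overlapping ranges. */
	list_for_each_entry(mo, &mn->linear, link)
		if (mo->it.start < end && mo->it.last >= start)
			cancel_userptr(mo);		/* assumed helper */
}

static void invalidate_range(struct i915_mmu_notifier *mn,
			     unsigned long start, unsigned long end)
{
	if (mn->has_linear)	/* overlapping objects registered */
		invalidate_range__linear(mn, start, end);
	else
		interval_tree_invalidate(mn, start, end); /* fast path */
}
```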
-
- 23 Jul, 2014 1 commit
-
-
Submitted by Chris Wilson
During the range invalidate, we walk the list of buffers associated with the mmu_notifier and find the ones that overlap the range. An optimisation is made to speed up the iteration by assuming the previous iter is still valid whilst the tree is unmodified. This exposes a bug when a range invalidate is triggered after we have just created the mmu_notifier, but before attaching any buffers. In that case, we presume we have an unmodified list and start walking from the last iter, which is NULL. Oops. The easiest fix is then to initialise the serial of the tree to 1.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Testcase: igt/gem_userptr_blits/stress-mm
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
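The one-line nature of the fix is easiest to see in context; a sketch of the create path, with structure fields assumed from the commit text (a freshly zeroed cached serial can then never match the tree's serial and masquerade as "unmodified"):

```c
static struct i915_mmu_notifier *
i915_mmu_notifier_create(struct mm_struct *mm)
{
	struct i915_mmu_notifier *mn = kzalloc(sizeof(*mn), GFP_KERNEL);

	if (mn == NULL)
		return ERR_PTR(-ENOMEM);

	spin_lock_init(&mn->lock);
	mn->objects = RB_ROOT;
	mn->serial = 1;	/* never 0: a zeroed cached serial must mismatch */

	/* ... mmu_notifier_register() against mm, error handling ... */
	return mn;
}
```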
-
- 17 May, 2014 1 commit
-
-
Submitted by Chris Wilson
By exporting the ability to map user addresses and insert PTEs representing their backing pages into the GTT, we can exploit UMA in order to utilize normal application data as a texture source or even as a render target (depending upon the capabilities of the chipset). This has a number of uses, with zero-copy downloads to the GPU and efficient readback making the intermixed streaming of CPU and GPU operations fairly efficient. This ability has many widespread implications, from faster rendering of client-side software rasterisers (chromium), mitigation of stalls due to read back (firefox) and to faster pipelining of texture data (such as pixel buffer objects in GL or data blobs in CL).

v2: Compile with CONFIG_MMU_NOTIFIER
v3: We can sleep while performing invalidate-range, which we can utilise to drop our page references prior to the kernel manipulating the vma (for either discard or cloning) and so protect normal users.
v4: Only run the invalidate notifier if the range intercepts the bo.
v5: Prevent userspace from attempting to GTT mmap non-page aligned buffers
v6: Recheck after reacquire mutex for lost mmu.
v7: Fix implicit padding of ioctl struct by rounding to next 64bit boundary.
v8: Fix rebasing error after forward porting the back port.
v9: Limit the userptr to page aligned entries. We now expect userspace to handle all the offset-in-page adjustments itself.
v10: Prevent vma from being copied across fork to avoid issues with cow.
v11: Drop vma behaviour changes -- locking is nigh on impossible. Use a worker to load user pages to avoid lock inversions.
v12: Use get_task_mm()/mmput() for correct refcounting of mm.
v13: Use a worker to release the mmu_notifier to avoid lock inversion
v14: Decouple mmu_notifier from struct_mutex using a custom mmu_notifier with its own locking and tree of objects for each mm/mmu_notifier.
v15: Prevent overlapping userptr objects, and invalidate all objects within the mmu_notifier range
v16: Fix a typo for iterating over multiple objects in the range and rearrange error path to destroy the mmu_notifier locklessly. Also close a race between invalidate_range and the get_pages_worker.
v17: Close a race between get_pages_worker/invalidate_range and fresh allocations of the same userptr range - and notice that struct_mutex was presumed to be held when during creation it wasn't.
v18: Sigh. Fix the refactor of st_set_pages() to allocate enough memory for the struct sg_table and to clear it before reporting an error.
v19: Always error out on read-only userptr requests as we don't have the hardware infrastructure to support them at the moment.
v20: Refuse to implement read-only support until we have the required infrastructure - but reserve the bit in flags for future use.
v21: use_mm() is not required for get_user_pages(). It is only meant to be used to fix up the kernel thread's current->mm for use with copy_user().
v22: Use sg_alloc_table_from_pages for that chunky feeling
v23: Export a function for sanity checking dma-buf rather than encode userptr details elsewhere, and clean up comments based on suggestions by Bradley.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
Cc: Akash Goel <akash.goel@intel.com>
Cc: "Volkin, Bradley D" <bradley.d.volkin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
[danvet: Frob ioctl allocation to pick the next one - will cause a bit of fuss with create2 apparently, but such are the rules.]
[danvet2: oops, forgot to git add after manual patch application]
[danvet3: Appease sparse.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
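To make the v21/v22 notes concrete, a heavily simplified, hedged sketch of the page-population flow: pin the user pages, then wrap them in a scatterlist. Names are illustrative except for the kernel helpers; get_user_pages() is shown in its recent four-argument form (older kernels take extra task/mm/vmas arguments), and locking, the mmu-notifier hookup, and async cancellation are all elided:

```c
static int populate_userptr_pages(struct drm_i915_gem_object *obj)
{
	const unsigned long num_pages = obj->base.size >> PAGE_SHIFT;
	struct page **pvec;
	struct sg_table *st;
	long pinned;
	int ret;

	pvec = kvmalloc_array(num_pages, sizeof(*pvec), GFP_KERNEL);
	if (pvec == NULL)
		return -ENOMEM;

	/* v21: no use_mm() needed; get_user_pages() pins from the
	 * caller's own address space.  Read-only requests were already
	 * refused at ioctl time (v20), so writes are always allowed. */
	pinned = get_user_pages(obj->userptr.ptr, num_pages,
				FOLL_WRITE, pvec);
	if (pinned < 0) {
		ret = pinned;
		goto free_pvec;
	}
	if ((unsigned long)pinned < num_pages) {
		ret = -EFAULT;	/* partial pin: release and fail */
		goto release;
	}

	st = kmalloc(sizeof(*st), GFP_KERNEL);
	if (st == NULL) {
		ret = -ENOMEM;
		goto release;
	}

	/* v22: sg_alloc_table_from_pages() coalesces physically
	 * contiguous pages into fewer scatterlist entries. */
	ret = sg_alloc_table_from_pages(st, pvec, pinned, 0,
					(unsigned long)pinned << PAGE_SHIFT,
					GFP_KERNEL);
	if (ret) {
		kfree(st);
		goto release;
	}

	obj->pages = st;
	kvfree(pvec);
	return 0;

release:
	while (pinned > 0)
		put_page(pvec[--pinned]);
free_pvec:
	kvfree(pvec);
	return ret;
}
```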
-