- 04 May 2016, 2 commits
-
-
By Daniel Vetter

Embarrassingly, while fixing up the old paths for i915 I managed to misplace a locking check for the new _unlocked paths. That's what I get for not retesting on radeon.

Fixes: 9f0ba539 ("drm/gem: support BO freeing without dev->struct_mutex")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
By Daniel Vetter

Finally all the core gem code and a lot of drivers are entirely free of dev->struct_mutex dependencies, and we can start to have an entirely lockless unref path.

To make sure that no one who touches the core code accidentally breaks existing drivers which still require dev->struct_mutex, I've made the might_lock check unconditional.

While at it, de-inline the ref/unref functions; they've become a bit too big.

v2: Make it not leak like a sieve.

v3: Review from Lucas:
- drop != NULL in pointer checks
- fixup copypasted kerneldoc to actually match the functions

v4: Add __drm_gem_object_unreference as a fastpath helper for drivers who abolished dev->struct_mutex, requested by Chris.

v5: Fix silly mistake in drm_gem_object_unreference_unlocked caught by intel-gfx CI - I checked for gem_free_object instead of gem_free_object_unlocked ...

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Lucas Stach <l.stach@pengutronix.de> (v3)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v4)
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1462178451-1765-1-git-send-email-daniel.vetter@ffwll.ch
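A sketch of what the de-inlined unref path with the unconditional might_lock check can look like - close to the description above, but illustrative rather than the verbatim patch:

    void drm_gem_object_unreference_unlocked(struct drm_gem_object *obj)
    {
            struct drm_device *dev;

            if (!obj)
                    return;

            dev = obj->dev;

            /* unconditional: catches core changes that would break drivers
             * still relying on dev->struct_mutex for the final unref */
            might_lock(&dev->struct_mutex);

            if (dev->driver->gem_free_object_unlocked)
                    /* fastpath for drivers that abolished struct_mutex */
                    kref_put(&obj->refcount, drm_gem_object_free);
            else if (kref_put_mutex(&obj->refcount, drm_gem_object_free,
                                    &dev->struct_mutex))
                    mutex_unlock(&dev->struct_mutex);
    }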
-
- 20 April 2016, 1 commit
-
-
By Daniel Vetter

It's racy, and creating mmap offsets is a slowpath, so better to remove it to avoid drivers doing broken things. The only user is i915, and it's ok there because everything (well, almost) is protected by dev->struct_mutex in i915-gem.

While at it, add a note in the create_mmap_offset kerneldoc that drivers must release it again. And then I also noticed that drm_gem_object_release entirely lacks kerneldoc.

Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1459330852-27668-14-git-send-email-daniel.vetter@ffwll.ch
-
- 15 April 2016, 1 commit
-
-
By Chris Wilson

When userspace closes a handle, we remove it from the file->object_idr and then tell the driver to drop its references to that file/handle. However, as the file/handle is already available again for reuse, it may be reallocated back to userspace and active on a new object before the driver has had a chance to drop the old file/handle references.

Whilst calling back into the driver, we have to drop the file->table_lock spinlock, and so to prevent reusing the closed handle we mark that handle as stale in the idr, perform the callback and then remove the handle. We set the stale handle to point to the NULL object, so any idr_find() whilst the driver is removing the handle will return NULL, just as if the handle were already removed from the idr.

Note: This will be used to have a direct handle -> vma lookup table, instead of first a handle -> obj lookup, and then an (obj, vm) -> vma lookup.

v2: Use NULL rather than an ERR_PTR to avoid having to adjust callers. idr_alloc() tracks existing handles using an internal bitmap, so we are free to use the NULL object as our stale identifier.

v3: Needed to update the return value check after changing from using the stale error pointer to NULL.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Thierry Reding <treding@nvidia.com>
[danvet: Add note about the use-case.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1460721308-32405-1-git-send-email-chris@chris-wilson.co.uk
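The sequence described above, sketched for drm_gem_handle_delete() - hedged: helper names follow the surrounding drm core, but details may differ from the actual patch:

    spin_lock(&filp->table_lock);
    /* mark the handle stale: concurrent idr_find() now returns NULL,
     * exactly as if the handle had already been removed */
    obj = idr_replace(&filp->object_idr, NULL, handle);
    spin_unlock(&filp->table_lock);
    if (IS_ERR_OR_NULL(obj))
            return -EINVAL;

    /* the driver callback runs without table_lock, but the stale entry
     * keeps the handle from being reallocated underneath us */
    drm_gem_object_release_handle(handle, obj, filp);

    spin_lock(&filp->table_lock);
    idr_remove(&filp->object_idr, handle);
    spin_unlock(&filp->table_lock);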
-
- 05 April 2016, 1 commit
-
-
By Kirill A. Shutemov

PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized, and it is unlikely it ever will.

We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE, and it's a constant source of confusion on whether the PAGE_CACHE_* or PAGE_* constant should be used in a particular case, especially on the border between fs and mm.

Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable. Let's stop pretending that pages in the page cache are special. They are not.

The changes are pretty straightforward:

 - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
 - page_cache_get() -> get_page();
 - page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later.

There are a few places in the code where coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation will also be addressed with a separate patch.

    virtual patch

    @@
    expression E;
    @@
    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    expression E;
    @@
    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    @@
    - PAGE_CACHE_SHIFT
    + PAGE_SHIFT

    @@
    @@
    - PAGE_CACHE_SIZE
    + PAGE_SIZE

    @@
    @@
    - PAGE_CACHE_MASK
    + PAGE_MASK

    @@
    expression E;
    @@
    - PAGE_CACHE_ALIGN(E)
    + PAGE_ALIGN(E)

    @@
    expression E;
    @@
    - page_cache_get(E)
    + get_page(E)

    @@
    expression E;
    @@
    - page_cache_release(E)
    + put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 January 2016, 5 commits
-
-
By Chris Wilson

drm_gem_handle_delete() contains its own version of drm_gem_object_release_handle(), so let's just call the release method instead.

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1451986951-3703-2-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Chris Wilson

Good practice dictates that we do not leak stale information to our callers, and that we should avoid overwriting an outparam on an error path.

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1451986951-3703-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Chris Wilson

Unlike the handle table, the name table uses a sleeping mutex rather than a spinlock. The allocation is thus in a normal context, and we can use the simpler sleeping gfp_t rather than have to take from the atomic reserves.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1451902261-25380-3-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
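In code terms, an illustrative sketch: the flink name is allocated under dev->object_name_lock, a mutex, so plain GFP_KERNEL is safe:

    /* dev->object_name_lock is a mutex: we may sleep here */
    mutex_lock(&dev->object_name_lock);
    /* GFP_KERNEL instead of an atomic gfp: no need to raid the reserves */
    ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_KERNEL);
    if (ret > 0)
            obj->name = ret;
    mutex_unlock(&dev->object_name_lock);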
-
By Chris Wilson

We only need a single reference count for all handles (i.e. non-zero obj->handle_count), and so can trim a few atomic operations by only taking the reference on the first handle and dropping it after the last.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1451902261-25380-2-git-send-email-chris@chris-wilson.co.uk
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
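The first/last-handle pattern, sketched - assuming handle_count is protected by dev->object_name_lock, as elsewhere in the file:

    bool final = false;

    /* on handle creation, under dev->object_name_lock */
    if (obj->handle_count++ == 0)
            drm_gem_object_reference(obj);

    /* on handle destruction */
    mutex_lock(&dev->object_name_lock);
    if (--obj->handle_count == 0)
            final = true;   /* flink name etc. are torn down here too */
    mutex_unlock(&dev->object_name_lock);
    if (final)
            drm_gem_object_unreference_unlocked(obj);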
-
By Chris Wilson

The current error path for failure when establishing a handle for a GEM object is unbalanced, e.g. we call object_close() without first calling object_open(). Use the typical onion structure to only undo what has been set up prior to the error.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
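The onion structure, sketched for drm_gem_handle_create_tail() - names per the drm core of this era; a sketch, not the verbatim diff:

    ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
    if (ret)
            goto err_remove;

    if (dev->driver->gem_open_object) {
            ret = dev->driver->gem_open_object(obj, file_priv);
            if (ret)
                    goto err_revoke;
    }

    return 0;

    /* unwind strictly in reverse order of setup */
    err_revoke:
            drm_vma_node_revoke(&obj->vma_node, file_priv->filp);
    err_remove:
            spin_lock(&file_priv->table_lock);
            idr_remove(&file_priv->object_idr, handle);
            spin_unlock(&file_priv->table_lock);
            return ret;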
-
- 24 November 2015, 1 commit
-
-
By Daniel Vetter

A bunch of things have been removed meanwhile and the docs not fully brought up to speed. Also close a few gaps where I noticed missing kerneldoc while reading through the overview sections.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1445533889-7661-3-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 November 2015, 1 commit
-
-
By Michal Hocko

There are many places which use mapping_gfp_mask to restrict a more generic gfp mask which would be used for allocations which are not directly related to the page cache but are performed in the same context.

Let's introduce a helper function which makes the restriction explicit and easier to track. This patch doesn't introduce any functional changes.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Michal Hocko <mhocko@suse.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
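The helper introduced is mapping_gfp_constraint(); its body is just the explicit AND (a sketch of the idea):

    /* restrict a generic gfp mask by the mapping's own gfp mask */
    static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
                                               gfp_t gfp_mask)
    {
            return mapping_gfp_mask(mapping) & gfp_mask;
    }

A caller that previously open-coded mapping_gfp_mask(mapping) & ~__GFP_FS can now write mapping_gfp_constraint(mapping, ~__GFP_FS), which makes the restriction explicit.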
-
- 19 October 2015, 2 commits
-
-
By Daniel Vetter

Compared to wrapping the final kref_put with dev->struct_mutex this allows us to only acquire the offset manager lock both in the final cleanup and in the lookup. Which has the upside that no locks leak out of the core abstractions. But it means that we need to hold a temporary reference to the object while checking mmap constraints, to make sure the object doesn't disappear. Extending the critical region would have worked too, but would result in more leaky locking.

Also, this is the final bit which required dev->struct_mutex in gem core; now modern drivers can be completely struct_mutex free!

This needs a new drm_vma_offset_exact_lookup_locked and makes both drm_vma_offset_exact_lookup and drm_vma_offset_lookup unused.

v2: Don't leak object references in failure paths (David).

v3: Add a comment from Chris explaining how the ordering works, with the slight adjustment that I dropped any mention of struct_mutex since with this patch it's now immaterial to core gem.

Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://mid.gmane.org/1444901623-18918-1-git-send-email-daniel.vetter@ffwll.ch
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
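The lookup side of drm_gem_mmap() then follows this pattern - close to the description above, but a sketch rather than the verbatim patch:

    struct drm_vma_offset_node *node;
    struct drm_gem_object *obj = NULL;

    drm_vma_offset_lock_lookup(dev->vma_offset_manager);
    node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
                                              vma->vm_pgoff,
                                              vma_pages(vma));
    if (likely(node)) {
            obj = container_of(node, struct drm_gem_object, vma_node);
            /* take a temporary reference so the object can't disappear
             * while mmap constraints are checked outside the lock; skip
             * objects already on their way to destruction */
            if (!kref_get_unless_zero(&obj->refcount))
                    obj = NULL;
    }
    drm_vma_offset_unlock_lookup(dev->vma_offset_manager);

    if (!obj)
            return -EINVAL;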
-
By Daniel Vetter

Just a random thing I spotted while reading code - better safe than sorry.

Link: http://mid.gmane.org/1444894601-5200-11-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 16 October 2015, 1 commit
-
-
By Daniel Vetter

Since

    commit 131e663b
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Thu Jul 9 23:32:33 2015 +0200

        drm/gem: rip out drm vma accounting for gem mmaps

there is no need for this any more.

v2: Fixup compile noise spotted by the 0-day build.

Link: http://mid.gmane.org/1444894601-5200-9-git-send-email-daniel.vetter@ffwll.ch
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 10 August 2015, 1 commit
-
-
By Daniel Vetter

BUG_ON kills the driver, WARN_ON is much friendlier. And usually nothing bad happens when the locking is slightly busted.

v2: Fix typos in the commit message Thierry spotted.

Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
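For illustration, the sort of conversion this implies - a sketch of one such locking check, not necessarily the exact sites touched:

    -       BUG_ON(!mutex_is_locked(&dev->struct_mutex));
    +       WARN_ON(!mutex_is_locked(&dev->struct_mutex));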
-
- 14 July 2015, 1 commit
-
-
By Daniel Vetter

Doesn't really add anything which can't be figured out through proc files. And it more clearly separates the new gem mmap handling code from the old drm maps mmap handling code, which is surely a good thing.

Cc: Martin Peres <martin.peres@free.fr>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 November 2014, 2 commits
-
-
By Thierry Reding

While at it, adjust the drm_gem_handle_create() function declaration to be more consistent with other functions in the file.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Thierry Reding <treding@nvidia.com>
-
By Thierry Reding

The function being documented is drm_gem_object_handle_free(), not drm_gem_object_free().

Signed-off-by: Thierry Reding <treding@nvidia.com>
-
- 03 October 2014, 1 commit
-
-
By Andrzej Hajda

The patch replaces direct access to the driver_features field with calls to a helper function.

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
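For illustration, the before/after shape of such a call site - a sketch, with DRIVER_GEM as the example feature bit:

    /* before: poking at the field directly */
    if (!(dev->driver->driver_features & DRIVER_GEM))
            return -ENODEV;

    /* after: the drm_core_check_feature() helper */
    if (!drm_core_check_feature(dev, DRIVER_GEM))
            return -ENODEV;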
-
- 24 September 2014, 2 commits
-
-
By Daniel Vetter

v2: Don't forget git add, noticed by David.

Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Daniel Vetter

The only user I could dig out was i915, back when ums+gem was still a thing. But we've just very much killed that, and even if someone screams about it we should resurrect it with a special hack (wrapping drm_gem_mmap) in i915, not in the core code.

So good riddance to another entry point of the legacy buffer mapping code.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 15 September 2014, 1 commit
-
-
By Laurent Pinchart

The drm_gem_private_object_init() function is called drm_gem_object_init() in its kerneldoc. Fix it.

Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 September 2014, 1 commit
-
-
By Daniel Vetter

This way drivers can't grow crazy ideas any more, and it also helps a bit in reviewing EXPORT_SYMBOLs.

v2: Even more stuff. Unfortunately we can't move drm_vm_open_locked because exynos does some horrible stuff with it.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 July 2014, 1 commit
-
-
By David Herrmann

drm_gem_get_pages() currently allows passing a 'gfp' parameter that is passed to shmem combined with mapping_gfp_mask(). Given that the default mapping_gfp_mask() is GFP_HIGHUSER, it is _very_ unlikely that anyone will ever make use of that parameter. In fact, all drivers currently pass redundant flags or 0.

This patch removes the 'gfp' parameter. The only reason to keep it is to remove flags like __GFP_WAIT. But in its current form, it can only be used to add flags. So to remove __GFP_WAIT, you'd have to drop it from the mapping_gfp_mask, which again is stupid as this mask is used by shmem-core for other allocations, too.

If any driver ever requires that parameter, we can introduce a new helper that takes the raw 'gfp' parameter. The caller'd be responsible to combine it with mapping_gfp_mask() in a suitable way. The current drm_gem_get_pages() helper would then simply use mapping_gfp_mask() and call the new helper. This is what shmem_read_mapping_pages{_gfp,} does right now.

Moreover, the gfp-zone flag-usage is not obvious: If you pass a modified zone, shmem core will WARN() or even BUG(). In other words, the following must be true for 'gfp' passed to shmem_read_mapping_pages_gfp():

    gfp_zone(mapping_gfp_mask(mapping)) == gfp_zone(gfp)

Add a comment to drm_gem_read_pages() explaining that constraint.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
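After the change, the helper reduces to roughly the following shape - a sketch with simplified error handling; drm_malloc_ab/drm_free_large were the drm allocation helpers of this era:

    struct page **drm_gem_get_pages(struct drm_gem_object *obj)
    {
            struct address_space *mapping = file_inode(obj->filp)->i_mapping;
            struct page *p, **pages;
            int i, npages = obj->size >> PAGE_SHIFT;

            pages = drm_malloc_ab(npages, sizeof(struct page *));
            if (!pages)
                    return ERR_PTR(-ENOMEM);

            for (i = 0; i < npages; i++) {
                    /* no caller-supplied gfp: shmem uses the mapping's own
                     * gfp mask, so the zone can't change underneath it */
                    p = shmem_read_mapping_page(mapping, i);
                    if (IS_ERR(p))
                            goto fail;
                    pages[i] = p;
            }
            return pages;

    fail:
            while (i--)
                    put_page(pages[i]);
            drm_free_large(pages);
            return ERR_CAST(p);
    }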
-
- 27 May 2014, 1 commit
-
-
By David Herrmann

shmem has supported page relocations during swapin for quite some time. It was implemented in:

    commit bde05d1c
    Author: Hugh Dickins <hughd@google.com>
    Date:   Tue May 29 15:06:38 2012 -0700

        shmem: replace page if mapping excludes its zone

The gem comment about wrongly placed DMA32 pages is no longer valid. Replace it with a proper comment but keep the BUG_ON() to verify correct shmem behavior.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 16 March 2014, 3 commits
-
-
By David Herrmann

There is no need to initialize this variable, so drop it. Otherwise, the compiler won't warn if we use it uninitialized.

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
-
By David Herrmann

All drivers currently need to clean up the vma-node manually. There is no fancy logic involved, so let's just clean it up unconditionally. The vma-manager correctly catches multiple calls, so we are fine.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
-
By David Herrmann

Remove double whitespace and wrong indentation.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
-
- 13 March 2014, 1 commit
-
-
By Daniel Vetter

Fairly incomplete, but at least a start.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 21 January 2014, 1 commit
-
-
By Daniel Vetter

At least drm/i915 expects that the obj->dev pointer is set even in failure paths. Specifically, when the shmem initialization fails we call i915_gem_object_free, which needs to deref obj->base.dev to get at the slab pointer in the device private structure. And the shmem allocation can easily fail when userspace is hitting open file limits.

Doing the structure init even when the shmem file allocation fails prevents this Oops.

This is a regression from

    commit 89c8233f
    Author: David Herrmann <dh.herrmann@gmail.com>
    Date:   Thu Jul 11 11:56:32 2013 +0200

        drm/gem: simplify object initialization

v2: Add regression note which Chris supplied.

Testcase: igt/gem_fd_exhaustion
Reported-and-Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
References: http://lists.freedesktop.org/archives/intel-gfx/2014-January/038433.html
Cc: stable@vger.kernel.org
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
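The fix amounts to doing the structure init before the fallible shmem allocation; drm_gem_object_init() then reads roughly:

    int drm_gem_object_init(struct drm_device *dev,
                            struct drm_gem_object *obj, size_t size)
    {
            struct file *filp;

            /* initialize first, so obj->dev is valid even on failure */
            drm_gem_private_object_init(dev, obj, size);

            filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
            if (IS_ERR(filp))
                    return PTR_ERR(filp);

            obj->filp = filp;
            return 0;
    }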
-
- 14 January 2014, 1 commit
-
-
By Daniel Vetter

This was hidden in a generic void * dev->mm_private, but only ever used for gem. Thanks to this fake generic pretension no one noticed that Rob's drm drivers are now all broken. So just give the offset manager a typed pointer and fix up msm, omapdrm and tilcdc.

v2: Fixup compile fail.

v3: Fixup rebase fail that David spotted.

Cc: David Herrmann <dh.herrmann@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 18 December 2013, 1 commit
-
-
By Kristian Høgsberg

There's no reason to keep a reference to objects in the name idr. Each handle to an object holds a reference to the object, and just before we destroy the last handle we take the object out of the name idr. Thus, if an object is in the name idr, there's at least one reference to it. Or to put it another way, the name idr reference will never keep the object alive on its own. It just looks like it does, which is confusing.

Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 09 October 2013, 1 commit
-
-
By David Herrmann

All drivers embed gem-objects into their own buffer objects. There is no reason to keep drm_gem_object_alloc(), gem->driver_private and ->gem_init_object() anymore.

New drivers are highly encouraged to do the same. There is no benefit in allocating gem-objects separately.

Cc: Dave Airlie <airlied@gmail.com>
Cc: Alex Deucher <alexdeucher@gmail.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Ben Skeggs <skeggsb@gmail.com>
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
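The embedding pattern all drivers are expected to follow, as a minimal sketch (struct foo_bo and its field are hypothetical names):

    struct foo_bo {
            struct drm_gem_object base;     /* embedded, not allocated via
                                             * drm_gem_object_alloc() */
            void *vaddr;                    /* driver state lives alongside,
                                             * not behind gem->driver_private */
    };

    static inline struct foo_bo *to_foo_bo(struct drm_gem_object *gem)
    {
            return container_of(gem, struct foo_bo, base);
    }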
-
- 30 August 2013, 1 commit
-
-
By Thierry Reding

Drivers that don't support PRIME will not have initialized the PRIME-specific private component of struct drm_file. If called for such drivers, the drm_gem_remove_prime_handles() function will crash. Fix it by checking for PRIME support prior to removing the PRIME handles.

Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
-
- 27 August 2013, 2 commits
-
-
By David Herrmann

We implement automatic vma mmap() access management for all drivers using gem_mmap. We use the vma manager to add each open-file that creates a gem-handle to the vma-node of the underlying gem object. Once the handle is destroyed, we drop the open-file again.

This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see whether an open-file is granted access. In drm_gem_mmap() we use this to verify that unprivileged users cannot guess gem offsets and map arbitrary buffers.

Note that this manages access for _all_ gem users (also TTM+GEM), but the actual access checks are only done for drm_gem_mmap(). TTM drivers use the TTM mmap helpers, which need to do that separately.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
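The three hook points, sketched (API names per the companion vma-manager patch below; the filp-based signatures are an assumption for this era of the API):

    /* on gem-handle creation: grant this open-file access to the node */
    ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);

    /* in drm_gem_mmap(): reject offsets this open-file was never granted */
    if (!drm_vma_node_is_allowed(node, filp))
            return -EACCES;

    /* on gem-handle destruction: revoke the grant again */
    drm_vma_node_revoke(&obj->vma_node, file_priv->filp);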
-
By David Herrmann

The VMA offset manager uses a device-global address space. Hence, any user can currently map any offset-node they want; they only need to guess the right offset. If we wanted per-open-file offset spaces, we'd either need VM_NONLINEAR mappings or multiple "struct address_space" trees. As neither really scales, we implement access management in the VMA manager itself.

We use an rb-tree to store open-files for each VMA node. On each mmap call, GEM, TTM or the drivers must check whether the current user is allowed to map this file.

We add a separate lock for each node as there is no generic lock available for the caller to protect the node easily.

As we currently don't know whether an object may be used for mmap(), we have to do access management for all objects. If it turns out to slow down handle creation/deletion significantly, we can optimize it in several ways:

- Most times only a single filp is added per bo, so we could use a static "struct file *main_filp" which is checked/added/removed first before we fall back to the rbtree+drm_vma_offset_file. This could even be done locklessly with rcu.

- Let user-space pass a hint whether mmap() should be supported on the bo and avoid access management if not.

- .. there are probably more ideas once we have benchmarks ..

v2: add drm_vma_node_verify_access() helper

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 21 August 2013, 3 commits
-
-
By Daniel Vetter

... not only when the dma-buf is freshly created. In contrived examples someone else could have exported/imported the dma-buf already and handed us the gem object with a flink name. If such an object gets reexported as a dma_buf we won't have it in the handle cache already, which breaks the guarantee that for dma-buf imports we always hand back an existing handle if there is one.

This is exercised by igt/prime_self_import/with_one_bo_two_files

Now if we extend the locked sections just a notch more we can also plug the racy buf/handle cache setup in handle_to_fd: If evil userspace races a concurrent gem close against a prime export operation we can end up tearing down the gem handle before the dma buf handle cache is set up. When handle_to_fd gets around to adding the handle to the cache there will be no one left to clean it up, effectively leaking the bo (and the dma-buf, since the handle cache holds a ref on the dma-buf):

    Thread A                        Thread B

    handle_to_fd:

    lookup gem object from handle
    creates new dma_buf

                                    gem_close on the same handle
                                    obj->dma_buf is set, but file priv buf
                                    handle cache has no entry

                                    obj->handle_count drops to 0

    drm_prime_add_buf_handle sets up
    the handle cache

-> We have a dma-buf reference in the handle cache, but since the handle_count of the gem object already dropped to 0 no one will clean it up. When closing the drm device fd we'll hit the WARN_ON in drm_prime_destroy_file_private.

The important change is to extend the critical section of the filp->prime.lock to cover the gem handle lookup. This serializes with a concurrent gem handle close.

This leak is exercised by igt/prime_self_import/export-vs-gem_close-race

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Daniel Vetter

With the reworked semantics and locking of the obj->dma_buf pointer, this pointer is always set as long as there's still a gem handle around and a dma_buf associated with this gem object.

Also, the per file-priv lookup-cache for dma-buf importing is now unified between foreign and native objects. Hence we don't need to special-case the cleanup any more and can simply drop the clause which only runs for foreign objects, i.e. with obj->import_attach set.

Note that with this change (actually with the previous one to always set up obj->dma_buf even for foreign objects) it is no longer required to set obj->import_attach when importing a foreign object. So update comments accordingly, too.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Daniel Vetter

The export dma-buf cache is semantically similar to a flink name. So it makes sense to treat it the same and remove the name (i.e. the dma_buf pointer) and its references when the last gem handle disappears.

Again we need to be careful, but doubly so: Not only could someone race an export against a gem close ioctl (so we need to recheck obj->handle_count again when assigning the new name), but multiple exports can also race against each other. This is prevented by holding the dev->object_name_lock across the entire section which touches obj->dma_buf.

With the new scheme we also need to reinstate the obj->dma_buf link at import time (in case the only reference userspace has held in-between was through the dma-buf fd and not through any native gem handle). For simplicity we don't check whether it's a native object but unconditionally set up that link - with the new scheme of removing the obj->dma_buf reference when the last handle disappears we can do that.

To make it clear that this is not just for exported buffers anymore, also rename it from export_dma_buf to dma_buf.

To make sure that no one can race an fd_to_handle or handle_to_fd against gem_close we use the same tricks as in flink of extending the dev->object_name_lock critical section.

With this change we finally have a guaranteed 1:1 relationship (at least for native objects) between gem objects and dma-bufs, even accounting for races (which can happen since the dma-buf itself holds a reference while in-flight). This prevents igt/prime_self_import/export-vs-gem_close-race from Oopsing the kernel.

There is still a leak though, since the per-file-priv dma-buf/handle cache handling is racy. That will be fixed in a later patch.

v2: Remove the bogus dma_buf_put from the export_and_register_object failure path if we've raced with the handle count dropping to 0.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
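A sketch of the export-side recheck under that extended critical section (illustrative; export_and_register_object is named in the commit message, the body here is hedged):

    mutex_lock(&dev->object_name_lock);
    /* recheck under the lock: a racing gem_close may have dropped the
     * last handle while the dma_buf was being set up */
    if (obj->handle_count == 0) {
            ret = -ENOENT;
            goto out_unlock;
    }
    if (!obj->dma_buf) {
            obj->dma_buf = dmabuf;
            /* the obj->dma_buf link holds its own reference, dropped
             * again when the last gem handle disappears */
            get_dma_buf(obj->dma_buf);
    }
    out_unlock:
    mutex_unlock(&dev->object_name_lock);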
-