1. 04 May 2016, 2 commits
  2. 20 Apr 2016, 1 commit
  3. 15 Apr 2016, 1 commit
    • drm: Release driver references to handle before making it available again · f6cd7dae
      Committed by Chris Wilson
      When userspace closes a handle, we remove it from the file->object_idr
      and then tell the driver to drop its references to that file/handle.
      However, as the file/handle is already available again for reuse, it may
      be reallocated back to userspace and active on a new object before the
      driver has had a chance to drop the old file/handle references.
      
      Whilst calling back into the driver we have to drop the file->table_lock
      spinlock, so to prevent reuse of the closed handle we mark that handle as
      stale in the idr, perform the callback, and only then remove the handle.
      The stale handle is set to point to the NULL object, so any idr_find()
      issued whilst the driver is removing the handle returns NULL, just as if
      the handle had already been removed from the idr (see the sketch after
      this entry).
      
      Note: This will be used to have a direct handle -> vma lookup table,
      instead of first a handle -> obj lookup, and then an (obj, vm) -> vma
      lookup.
      
      v2: Use NULL rather than an ERR_PTR to avoid having to adjust callers.
      idr_alloc() tracks existing handles using an internal bitmap, so we are
      free to use the NULL object as our stale identifier.
      v3: Needed to update the return value check after changing from using
      the stale error pointer to NULL.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: dri-devel@lists.freedesktop.org
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel.vetter@intel.com>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Thierry Reding <treding@nvidia.com>
      [danvet: Add note about the use-case.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1460721308-32405-1-git-send-email-chris@chris-wilson.co.uk
      f6cd7dae
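      A minimal sketch of the stale-handle pattern described in this commit
      (condensed from the description above; release_handle() is an
      illustrative name, not the exact helper used by the patch):

          spin_lock(&filp->table_lock);
          obj = idr_replace(&filp->object_idr, NULL, handle);  /* mark handle stale */
          spin_unlock(&filp->table_lock);
          if (IS_ERR_OR_NULL(obj))
                  return -EINVAL;

          /* callbacks into the driver run without table_lock; idr_find() on
           * this handle now returns NULL, as if it were already removed */
          release_handle(handle, obj, filp);

          /* only now make the handle available for reuse */
          spin_lock(&filp->table_lock);
          idr_remove(&filp->object_idr, handle);
          spin_unlock(&filp->table_lock);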
  4. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      the page cache with chunks bigger than PAGE_SIZE.
      
      That promise never materialized, and it is unlikely it ever will.
      
      There are many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Switching globally to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are straightforward (a before/after example follows this entry):
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with Coccinelle using the
      script below. For some reason Coccinelle doesn't patch header files, so
      I've run spatch on them manually.
      
      The only adjustment after Coccinelle is reverting the change to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that Coccinelle didn't reach; I'll fix
      them manually in a separate patch. Comments and documentation will also
      be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
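      A hedged before/after illustration of the conversion in typical fs/driver
      code (hypothetical snippet, not taken from the patch itself):

          /* before */
          index  = pos >> PAGE_CACHE_SHIFT;
          offset = pos & ~PAGE_CACHE_MASK;
          page_cache_get(page);
          page_cache_release(page);

          /* after */
          index  = pos >> PAGE_SHIFT;
          offset = pos & ~PAGE_MASK;
          get_page(page);
          put_page(page);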
  5. 05 Jan 2016, 5 commits
  6. 24 Nov 2015, 1 commit
  7. 07 Nov 2015, 1 commit
  8. 19 Oct 2015, 2 commits
  9. 16 Oct 2015, 1 commit
  10. 10 Aug 2015, 1 commit
  11. 14 Jul 2015, 1 commit
  12. 13 Nov 2014, 2 commits
  13. 03 Oct 2014, 1 commit
  14. 24 Sep 2014, 2 commits
  15. 15 Sep 2014, 1 commit
  16. 12 Sep 2014, 1 commit
  17. 08 Jul 2014, 1 commit
    • drm/gem: remove misleading gfp parameter to get_pages() · 0cdbe8ac
      Committed by David Herrmann
      drm_gem_get_pages() currently allows passing a 'gfp' parameter that is
      passed to shmem combined with mapping_gfp_mask(). Given that the default
      mapping_gfp_mask() is GFP_HIGHUSER, it is _very_ unlikely that anyone will
      ever make use of that parameter. In fact, all drivers currently pass
      redundant flags or 0.
      
      This patch removes the 'gfp' parameter. The only reason to keep it is to
      remove flags like __GFP_WAIT. But in its current form, it can only be used
      to add flags. So to remove __GFP_WAIT, you'd have to drop it from the
      mapping_gfp_mask, which again is stupid as this mask is used by shmem-core
      for other allocations, too.
      
      If any driver ever requires that parameter, we can introduce a new helper
      that takes the raw 'gfp' parameter. The caller would be responsible for combining
      it with mapping_gfp_mask() in a suitable way. The current
      drm_gem_get_pages() helper would then simply use mapping_gfp_mask() and
      call the new helper. This is what shmem_read_mapping_pages{_gfp,} does
      right now.
      
      Moreover, the gfp-zone flag usage is not obvious: if you pass a modified
      zone, shmem core will WARN() or even BUG(). In other words, the following
      must be true for 'gfp' passed to shmem_read_mapping_pages_gfp():
          gfp_zone(mapping_gfp_mask(mapping)) == gfp_zone(gfp)
      Add a comment to drm_gem_get_pages() explaining that constraint (see also
      the sketch after this entry).
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      0cdbe8ac
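      A short sketch of the resulting interface and the shmem zone constraint
      (the prototypes reflect the change described above; the WARN_ON()
      fragment is purely illustrative and not part of the patch):

          /* before */
          struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
          /* after */
          struct page **drm_gem_get_pages(struct drm_gem_object *obj);

          /* constraint shmem imposes on any raw 'gfp' a future helper might accept */
          gfp_t mask = mapping_gfp_mask(file_inode(obj->filp)->i_mapping);
          WARN_ON(gfp_zone(mask) != gfp_zone(gfp));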
  18. 27 May 2014, 1 commit
  19. 16 Mar 2014, 3 commits
  20. 13 Mar 2014, 1 commit
  21. 21 Jan 2014, 1 commit
  22. 14 Jan 2014, 1 commit
  23. 18 Dec 2013, 1 commit
  24. 09 Oct 2013, 1 commit
    • drm: kill ->gem_init_object() and friends · 16eb5f43
      Committed by David Herrmann
      All drivers embed gem-objects into their own buffer objects. There is no
      reason to keep drm_gem_object_alloc(), gem->driver_private and
      ->gem_init_object() anymore.
      
      New drivers are strongly encouraged to do the same; there is no benefit
      in allocating gem objects separately (the embedding pattern is sketched
      after this entry).
      
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Alex Deucher <alexdeucher@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Inki Dae <inki.dae@samsung.com>
      Cc: Ben Skeggs <skeggsb@gmail.com>
      Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Acked-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      16eb5f43
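      A minimal sketch of the embedding pattern referred to above ("foo" names
      are hypothetical driver code):

          struct foo_bo {
                  struct drm_gem_object base;  /* gem object embedded, not allocated separately */
                  /* driver-private state lives here instead of gem->driver_private */
          };

          static struct foo_bo *foo_bo_create(struct drm_device *dev, size_t size)
          {
                  struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);

                  if (!bo)
                          return NULL;
                  if (drm_gem_object_init(dev, &bo->base, size)) {  /* instead of drm_gem_object_alloc() */
                          kfree(bo);
                          return NULL;
                  }
                  return bo;
          }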
  25. 30 Aug 2013, 1 commit
  26. 27 Aug 2013, 2 commits
    • drm/gem: implement vma access management · ca481c9b
      Committed by David Herrmann
      We implement automatic vma mmap() access management for all drivers using
      gem_mmap. We use the vma manager to add each open-file that creates a
      gem-handle to the vma-node of the underlying gem object. Once the handle
      is destroyed, we drop the open-file again.
      
      This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see
      whether an open-file is granted access. In drm_gem_mmap() we use this to
      verify that unprivileged users cannot guess gem offsets and map arbitrary
      buffers.
      
      Note that this manages access for _all_ gem users (TTM+GEM included), but
      the actual access checks are only done in drm_gem_mmap(); TTM drivers use
      the TTM mmap helpers, which need to perform the check separately (see the
      sketch after this entry).
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      ca481c9b
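      A condensed sketch of the flow described above, using the
      drm_vma_node_*() helpers from the drm/vma commit below (surrounding error
      handling and lookup code omitted):

          /* when a gem handle is created for filp */
          ret = drm_vma_node_allow(&obj->vma_node, filp);

          /* when that handle is destroyed */
          drm_vma_node_revoke(&obj->vma_node, filp);

          /* in drm_gem_mmap(), before mapping the looked-up offset node */
          if (!drm_vma_node_is_allowed(node, filp))
                  return -EACCES;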
    • drm/vma: add access management helpers · 88d7ebe5
      Committed by David Herrmann
      The VMA offset manager uses a device-global address-space. Hence, any
      user can currently map any offset-node they want. They only need to guess
      the right offset. If we wanted per open-file offset spaces, we'd either
      need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
      both doesn't really scale, we implement access management in the VMA
      manager itself.
      
      We use an rb-tree to store the open-files for each VMA node (see the
      sketch after this entry). On each mmap() call, GEM, TTM or the driver
      must check whether the current user is allowed to map this file.
      
      We add a separate lock for each node as there is no generic lock available
      for the caller to protect the node easily.
      
      As we currently don't know whether an object may be used for mmap(), we
      have to do access management for all objects. If it turns out to slow down
      handle creation/deletion significantly, we can optimize it in several
      ways:
       - Most times only a single filp is added per bo so we could use a static
         "struct file *main_filp" which is checked/added/removed first before we
         fall back to the rbtree+drm_vma_offset_file.
         This could be even done lockless with rcu.
       - Let user-space pass a hint whether mmap() should be supported on the
         bo and avoid access-management if not.
       - .. there are probably more ideas once we have benchmarks ..
      
      v2: add drm_vma_node_verify_access() helper
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      88d7ebe5
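      The per-node bookkeeping described above looks roughly like this (a
      simplified sketch; exact field names and the lock type may differ from
      the real drm_vma_manager.h):

          struct drm_vma_offset_file {
                  struct rb_node  vm_rb;     /* linked into the node's rb-tree of open-files */
                  struct file    *vm_filp;   /* the open-file allowed to mmap this node */
                  unsigned long   vm_count;  /* number of handles referencing this filp */
          };

          struct drm_vma_offset_node {
                  rwlock_t        vm_lock;   /* the separate per-node lock mentioned above */
                  /* ... offset/address-space bookkeeping ... */
                  struct rb_root  vm_files;  /* rb-tree of drm_vma_offset_file entries */
          };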
  27. 21 Aug 2013, 3 commits
    • drm/prime: Always add exported buffers to the handle cache · d0b2c533
      Committed by Daniel Vetter
      ... not only when the dma-buf is freshly created. In contrived
      examples someone else could have exported/imported the dma-buf already
      and handed us the gem object with a flink name. If such an object gets
      reexported as a dma_buf we won't have it in the handle cache already,
      which breaks the guarantee that for dma-buf imports we always hand
      back an existing handle if there is one.
      
      This is exercised by igt/prime_self_import/with_one_bo_two_files
      
      Now if we extend the locked sections just a notch more we can also
      plug the racy buf/handle cache setup in handle_to_fd:
      
      If evil userspace races a concurrent gem close against a prime export
      operation we can end up tearing down the gem handle before the dma buf
      handle cache is set up. When handle_to_fd gets around to adding the
      handle to the cache there will be no one left to clean it up,
      effectively leaking the bo (and the dma-buf, since the handle cache
      holds a ref on the dma-buf):
      
      Thread A			Thread B
      
      handle_to_fd:
      
      lookup gem object from handle
      creates new dma_buf
      
      				gem_close on the same handle
      				obj->dma_buf is set, but file priv buf
      				handle cache has no entry
      
      				obj->handle_count drops to 0
      
      drm_prime_add_buf_handle sets up the handle cache
      
      -> We have a dma-buf reference in the handle cache, but since the
      handle_count of the gem object already dropped to 0, no one will clean
      it up. When closing the drm device fd we'll hit the WARN_ON in
      drm_prime_destroy_file_private.
      
      The important change is to extend the critical section of
      filp->prime.lock to cover the gem handle lookup (sketched after this
      entry). This serializes with a concurrent gem handle close.
      
      This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      d0b2c533
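      A condensed sketch of the extended critical section in handle_to_fd
      described above (heavily simplified, not the verbatim diff; reference
      handling omitted):

          mutex_lock(&file_priv->prime.lock);   /* now taken before the handle lookup */
          obj = drm_gem_object_lookup(dev, file_priv, handle);
          if (!obj) {
                  ret = -ENOENT;
                  goto out_unlock;
          }
          /* export (or reuse) the dma-buf and set up the per-file handle cache
           * here, still under prime.lock, so a concurrent gem_close cannot
           * slip in between the two steps */
      out_unlock:
          mutex_unlock(&file_priv->prime.lock);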
    • drm/prime: Simplify drm_gem_remove_prime_handles · 838cd445
      Committed by Daniel Vetter
      With the reworked semantics and locking of the obj->dma_buf pointer, that
      pointer is always set as long as there is still a gem handle around and a
      dma_buf associated with this gem object.
      
      The per file-priv lookup cache for dma-buf importing is also unified
      between foreign and native objects.
      
      Hence we don't need to special-case the cleanup any more and can simply
      drop the clause which only runs for foreign objects, i.e. those with
      obj->import_attach set.
      
      Note that with this change (actually with the previous one, which always
      sets up obj->dma_buf even for foreign objects) it is no longer required
      to set obj->import_attach when importing a foreign object. Update the
      comments accordingly, too.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      838cd445
    • drm/prime: proper locking+refcounting for obj->dma_buf link · 319c933c
      Committed by Daniel Vetter
      The export dma-buf cache is semantically similar to a flink name, so it
      makes sense to treat it the same way and remove the name (i.e. the
      dma_buf pointer) and its references when the last gem handle disappears.
      
      Again we need to be careful, but doubly so: not only could someone race
      an export against a gem close ioctl (so we need to recheck
      obj->handle_count when assigning the new name), but multiple exports can
      also race against one another. This is prevented by holding
      dev->object_name_lock across the entire section which touches
      obj->dma_buf (see the sketch after this entry).
      
      With the new scheme we also need to reinstate the obj->dma_buf link at
      import time (in case the only reference userspace has held in-between
      was through the dma-buf fd and not through any native gem handle). For
      simplicity we don't check whether it's a native object but
      unconditionally set up that link - with the new scheme of removing the
      obj->dma_buf reference when the last handle disappears we can do that.
      
      To make it clear that this is not just for exported buffers anymore
      also rename it from export_dma_buf to dma_buf.
      
      To make sure that no one can race an fd_to_handle or handle_to_fd with
      gem_close, we use the same trick as in flink and extend the
      dev->object_name_lock critical section. With this change we finally
      have a guaranteed 1:1 relationship (at least for native objects)
      between gem objects and dma-bufs, even accounting for races (which can
      happen since the dma-buf itself holds a reference while in-flight).
      
      This prevents igt/prime_self_import/export-vs-gem_close-race from
      oopsing the kernel. There is still a leak, though, since the per-file
      priv dma-buf/handle cache handling is racy. That will be fixed in a
      later patch.
      
      v2: Remove the bogus dma_buf_put from the export_and_register_object
      failure path if we've raced with the handle count dropping to 0.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      319c933c
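      A condensed sketch of the export-path invariant described above
      (simplified; export_and_register_object() is the helper mentioned in the
      v2 note, and its exact signature is assumed here):

          mutex_lock(&dev->object_name_lock);
          if (obj->dma_buf) {
                  dmabuf = obj->dma_buf;
                  get_dma_buf(dmabuf);        /* reuse the existing export */
          } else {
                  dmabuf = export_and_register_object(dev, obj, flags);
                  /* sets obj->dma_buf and grabs its reference while the lock
                   * is held, so a racing gem_close or a second export cannot
                   * interleave */
          }
          mutex_unlock(&dev->object_name_lock);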