1. 01 Oct 2013 (10 commits)
  2. 20 Sep 2013 (1 commit)
  3. 11 Sep 2013 (1 commit)
  4. 05 Sep 2013 (1 commit)
  5. 04 Sep 2013 (2 commits)
  6. 02 Sep 2013 (1 commit)
  7. 31 Aug 2013 (2 commits)
  8. 30 Aug 2013 (6 commits)
  9. 29 Aug 2013 (1 commit)
  10. 27 Aug 2013 (1 commit)
    • drm/vma: add access management helpers · 88d7ebe5
      Committed by David Herrmann
      The VMA offset manager uses a device-global address-space. Hence, any
      user can currently map any offset-node they want. They only need to guess
      the right offset. If we wanted per open-file offset spaces, we'd either
      need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
      neither really scales, we implement access management in the VMA
      manager itself.
      
      We use an rb-tree to store open-files for each VMA node. On each mmap
      call, GEM, TTM or the drivers must check whether the current user is
      allowed to map this file.
      
      We add a separate lock for each node as there is no generic lock available
      for the caller to protect the node easily.
      
      As we currently don't know whether an object may be used for mmap(), we
      have to do access management for all objects. If it turns out to slow down
      handle creation/deletion significantly, we can optimize it in several
      ways:
       - Most times only a single filp is added per bo so we could use a static
         "struct file *main_filp" which is checked/added/removed first before we
         fall back to the rbtree+drm_vma_offset_file.
         This could even be done locklessly with RCU.
       - Let user-space pass a hint whether mmap() should be supported on the
         bo and avoid access-management if not.
       - .. there are probably more ideas once we have benchmarks ..
      
      v2: add drm_vma_node_verify_access() helper
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
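
      A minimal userspace sketch of the scheme described above, with a linked
      list and a pthread mutex standing in for the kernel's per-node rb-tree
      and lock; all structures and function names here are illustrative, not
      the drm_vma_manager API:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdlib.h>

      struct access_entry {
          const void *filp;               /* opaque handle of an open file */
          struct access_entry *next;
      };

      struct vma_node {
          pthread_mutex_t lock;           /* separate per-node lock, as in the patch */
          struct access_entry *files;     /* files currently allowed to mmap this node */
      };

      /* Handle creation: grant filp the right to map this node. */
      static int vma_node_allow(struct vma_node *node, const void *filp)
      {
          struct access_entry *e = malloc(sizeof(*e));
          if (!e)
              return -1;
          e->filp = filp;
          pthread_mutex_lock(&node->lock);
          e->next = node->files;
          node->files = e;
          pthread_mutex_unlock(&node->lock);
          return 0;
      }

      /* Handle deletion: revoke the mapping right again. */
      static void vma_node_revoke(struct vma_node *node, const void *filp)
      {
          pthread_mutex_lock(&node->lock);
          for (struct access_entry **p = &node->files; *p; p = &(*p)->next) {
              if ((*p)->filp == filp) {
                  struct access_entry *e = *p;
                  *p = e->next;
                  free(e);
                  break;
              }
          }
          pthread_mutex_unlock(&node->lock);
      }

      /* mmap path: may the caller map the buffer behind this node? */
      static bool vma_node_is_allowed(struct vma_node *node, const void *filp)
      {
          bool ok = false;
          pthread_mutex_lock(&node->lock);
          for (struct access_entry *e = node->files; e; e = e->next) {
              if (e->filp == filp) {
                  ok = true;
                  break;
              }
          }
          pthread_mutex_unlock(&node->lock);
          return ok;
      }

      The allow/revoke calls pair up with handle creation and deletion, and
      the is_allowed check is what "check whether the current user is allowed
      to map this file" amounts to in the mmap path, which is why the commit
      notes the cost shows up on handle creation/deletion.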
  11. 21 Aug 2013 (14 commits)
    • drm/prime: Always add exported buffers to the handle cache · d0b2c533
      Committed by Daniel Vetter
      ... not only when the dma-buf is freshly created. In contrived
      examples someone else could have exported/imported the dma-buf already
      and handed us the gem object with a flink name. If such an object gets
      reexported as a dma_buf we won't have it in the handle cache already,
      which breaks the guarantee that for dma-buf imports we always hand
      back an existing handle if there is one.
      
      This is exercised by igt/prime_self_import/with_one_bo_two_files
      
      Now if we extend the locked sections just a notch more we can also
      plug the racy buf/handle cache setup in handle_to_fd:
      
      If evil userspace races a concurrent gem close against a prime export
      operation we can end up tearing down the gem handle before the dma buf
      handle cache is set up. When handle_to_fd gets around to adding the
      handle to the cache there will be no one left to clean it up,
      effectively leaking the bo (and the dma-buf, since the handle cache
      holds a ref on the dma-buf):
      
      Thread A			Thread B
      
      handle_to_fd:
      
      lookup gem object from handle
      creates new dma_buf
      
      				gem_close on the same handle
      				obj->dma_buf is set, but file priv buf
      				handle cache has no entry
      
      				obj->handle_count drops to 0
      
      drm_prime_add_buf_handle sets up the handle cache
      
      -> We have a dma-buf reference in the handle cache, but since the
      handle_count of the gem object already dropped to 0, no one will clean
      it up. When closing the drm device fd we'll hit the WARN_ON in
      drm_prime_destroy_file_private.
      
      The important change is to extend the critical section of the
      filp->prime.lock to cover the gem handle lookup. This serializes with
      a concurrent gem handle close.
      
      This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
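
      A toy model of the fix, with a pthread mutex standing in for
      filp->prime.lock and a couple of fields standing in for the handle table
      and the dma-buf/handle cache; none of the names below are the real DRM
      structures or functions. The point is only that both paths serialize on
      the same lock, and that the export path holds it from the handle lookup
      until the cache entry exists:

      #include <pthread.h>
      #include <stdbool.h>
      #include <stddef.h>

      /* Just enough per-file state to show the locking. */
      struct file_priv {
          pthread_mutex_t prime_lock;   /* stands in for filp->prime.lock */
          bool handle_alive;            /* "the gem handle still exists"  */
          void *dma_buf;                /* exported dma-buf, if any       */
          void *cached_buf;             /* dma-buf/handle cache entry     */
      };

      /* handle_to_fd: export the object behind the handle as a dma-buf. */
      static int handle_to_fd(struct file_priv *priv)
      {
          int ret = -1;

          /* The critical section now starts before the handle lookup ... */
          pthread_mutex_lock(&priv->prime_lock);

          if (priv->handle_alive) {                  /* lookup gem object from handle */
              if (!priv->dma_buf)
                  priv->dma_buf = &priv->dma_buf;    /* "creates new dma_buf"          */
              priv->cached_buf = priv->dma_buf;      /* always add to the handle cache */
              ret = 0;
          }

          /* ... and only ends once the cache entry is in place, so a racing
           * gem_close sees either no cache entry at all or a complete one. */
          pthread_mutex_unlock(&priv->prime_lock);
          return ret;                                /* installing the fd is elided */
      }

      /* gem_close on the last handle: tear down the handle and any cache entry. */
      static void gem_close(struct file_priv *priv)
      {
          pthread_mutex_lock(&priv->prime_lock);
          priv->handle_alive = false;
          priv->cached_buf = NULL;                   /* nothing left to leak */
          priv->dma_buf = NULL;
          pthread_mutex_unlock(&priv->prime_lock);
      }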
    • drm/prime: make drm_prime_lookup_buf_handle static · de9564d8
      Committed by Daniel Vetter
      ... and move it to the top of the file to avoid a forward
      declaration.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: Simplify drm_gem_remove_prime_handles · 838cd445
      Committed by Daniel Vetter
      With the reworked semantics and locking of the obj->dma_buf pointer,
      this pointer is always set as long as there's still a gem handle
      around and a dma_buf associated with this gem object.
      
      Also, the per file-priv lookup-cache for dma-buf importing is now
      unified between foreign and native objects.
      
      Hence we don't need to special-case the cleanup any more and can simply
      drop the clause which only runs for foreign objects, i.e. with
      obj->import_attach set.
      
      Note that with this change (actually with the previous one to always
      set up obj->dma_buf even for foreign objects) it is no longer required
      to set obj->import_attach when importing a foreign object. So update
      comments accordingly, too.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: proper locking+refcounting for obj->dma_buf link · 319c933c
      Committed by Daniel Vetter
      The export dma-buf cache is semantically similar to an flink name. So
      semantically it makes sense to treat it the same and remove the name
      (i.e. the dma_buf pointer) and its references when the last gem handle
      disappears.
      
      Again we need to be careful, but doubly so: not only could someone
      race an export with a gem close ioctl (so we need to recheck
      obj->handle_count again when assigning the new name), but multiple
      exports can also race against each other. This is prevented by
      holding the dev->object_name_lock across the entire section which
      touches obj->dma_buf.
      
      With the new scheme we also need to reinstate the obj->dma_buf link at
      import time (in case the only reference userspace has held in-between
      was through the dma-buf fd and not through any native gem handle). For
      simplicity we don't check whether it's a native object but
      unconditionally set up that link - with the new scheme of removing the
      obj->dma_buf reference when the last handle disappears we can do that.
      
      To make it clear that this is not just for exported buffers anymore
      also rename it from export_dma_buf to dma_buf.
      
      To make sure that no one can race a fd_to_handle or handle_to_fd with
      gem_close we use the same tricks as in flink of extending the
      dev->object_name_lock critical section. With this change we finally
      have a guaranteed 1:1 relationship (at least for native objects)
      between gem objects and dma-bufs, even accounting for races (which can
      happen since the dma-buf itself holds a reference while in-flight).
      
      This prevents igt/prime_self_import/export-vs-gem_close-race from
      Oopsing the kernel. There is still a leak though since the per-file
      priv dma-buf/handle cache handling is racy. That will be fixed in a
      later patch.
      
      v2: Remove the bogus dma_buf_put from the export_and_register_object
      failure path if we've raced with the handle count dropping to 0.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: completely close gem_open vs. gem_close races · 20228c44
      Committed by Daniel Vetter
      The gem flink name holds a reference onto the object itself, and this
      self-reference would prevent an flink'ed object from ever being
      freed. To break that loop we remove the flink name when the last
      userspace handle disappears, i.e. when obj->handle_count reaches 0.
      
      Now in gem_open we drop the dev->object_name_lock between the flink
      name lookup and actually adding the handle. This means a concurrent
      gem_close of the last handle could result in the flink name getting
      reaped right in between, i.e.
      
      Thread 1		Thread 2
      gem_open		gem_close
      
      flink -> obj lookup
      			handle_count drops to 0
      			remove flink name
      create_handle
      handle_count++
      
      If someone now flinks this object again, we'll get a new flink name.
      
      We can close this race by removing the lock dropping and making the
      entire lookup+handle_create sequence atomic. Unfortunately to still be
      able to share the handle_create logic this requires a
      handle_create_tail function which drops the lock - we can't hold the
      object_name_lock while calling into a driver's ->gem_open callback.
      
      Note that for flink fixing this race isn't really important, since
      racing gem_open against gem_close is clearly a userspace bug. And no
      matter how the race ends, we won't leak any references.
      
      But with dma-buf where the userspace dma-buf fd itself is refcounted
      this is a valid sequence and hence we should fix it. Therefore this
      patch here is just a warm-up exercise (and for consistency between
      flink buffer sharing and dma-buf buffer sharing with self-imports).
      
      Also note that this extension of the critical section in gem_open
      protected by dev->object_name_lock only works because it's now a
      mutex: A spinlock would conflict with the potential memory allocation
      in idr_preload().
      
      This is exercised by igt/gem_flink_race/flink_name.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
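
      The shape of that fix as a toy model: the flink-name lookup and the
      handle creation sit under one mutex (standing in for
      dev->object_name_lock), and the last-handle close reaps the name under
      the same mutex, so the interleaving shown above can no longer happen.
      The structures and names below are illustrative only, not the DRM code:

      #include <pthread.h>
      #include <stddef.h>

      struct gem_object {
          int handle_count;         /* userspace handles; not a plain refcount */
          int flink_name;           /* 0 means "no flink name" */
      };

      static pthread_mutex_t object_name_lock = PTHREAD_MUTEX_INITIALIZER;
      static struct gem_object *name_table[16];   /* tiny stand-in for the name idr */

      /* gem_open: look up an object by flink name and create a handle for it. */
      static int gem_open(int name)
      {
          int ret = -1;

          if (name <= 0 || name >= 16)
              return ret;

          pthread_mutex_lock(&object_name_lock);
          struct gem_object *obj = name_table[name];   /* flink -> obj lookup */
          if (obj) {
              obj->handle_count++;    /* create_handle, still under the lock */
              ret = 0;
          }
          /* A real handle_create_tail() would drop the lock here, before
           * calling into the driver's ->gem_open callback. */
          pthread_mutex_unlock(&object_name_lock);
          return ret;
      }

      /* gem_close of the last handle: reap the flink name under the same lock. */
      static void gem_close(struct gem_object *obj)
      {
          pthread_mutex_lock(&object_name_lock);
          if (--obj->handle_count == 0 && obj->flink_name) {
              name_table[obj->flink_name] = NULL;      /* remove flink name */
              obj->flink_name = 0;
          }
          pthread_mutex_unlock(&object_name_lock);
      }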
    • drm/gem: switch dev->object_name_lock to a mutex · cd4f013f
      Committed by Daniel Vetter
      I want to wrap the creation of a dma-buf from a gem object in it,
      so that the obj->export_dma_buf cache can be atomically filled in.
      
      Instead of creating a new mutex just for that variable I've figured
      I can reuse the existing dev->object_name_lock, especially since
      the new semantics will exactly mirror the flink obj->name already
      protected by that lock.
      
      v2: idr_preload/idr_preload_end is now an atomic section, so need to
      move the mutex locking outside.
      
      [airlied: fix up conflict with patch to make debugfs use lock]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: make drm_gem_object_handle_unreference_unlocked static · becee2a5
      Committed by Daniel Vetter
      No one outside of drm should use this, the official interfaces are
      drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
      is purely an implementation detail of gem.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: fix up flink name create race · a8e11d1c
      Committed by Daniel Vetter
      This is the 2nd attempt; I've always been a bit dissatisfied with the
      tricky nature of the first one:
      
      http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html
      
      The issue is that the flink ioctl can race with calling gem_close on
      the last gem handle. In that case we'll end up with a zero handle
      count, but an flink name (and its corresponding reference), which
      results in a neat space leak.
      
      In my first attempt I've solved this by rechecking the handle count.
      But fundamentally the issue is that ->handle_count isn't your usual
      refcount - it can be resurrected from 0 among other things.
      
      For those special beasts an atomic_t often suggests far more ordering
      than it actually guarantees. To prevent being tricked by those hairy
      semantics, take the easy way out and simply protect the handle count
      with the existing dev->object_name_lock.
      
      With that change implemented it's dead easy to fix the flink vs. gem
      close race: When we try to create the name we simply have to check
      whether there's still officially a gem handle around and, if not, refuse
      to create the flink name. Since the handle count decrement and flink
      name destruction are now also protected by that lock, the race is gone
      and we can't ever leak the flink reference again.
      
      Outside of the drm core only the exynos driver looks at the handle
      count, and tbh I have no idea why (it's just for debug dmesg output
      luckily).
      
      I've considered inlining the drm_gem_object_handle_free, but I plan to
      add more name-like things (like the exported dma_buf) to this scheme,
      so it's clearer to leave the handle freeing in its own function.
      
      This is exercised by the new gem_flink_race i-g-t testcase, which on
      my snb leaks gem objects at a rate of roughly 1k objects/s.
      
      v2: Fix up the error path handling in handle_create and make it more
      robust by simply calling object_handle_unreference.
      
      v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
      returns 1 for 0. Oops.
      
      v4: Squash in inlining of drm_gem_object_handle_reference as suggested
      by Dave Airlie and add a note that we now have a testcase.
      
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Inki Dae <inki.dae@samsung.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
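
      A toy model of the resulting rule: the handle count is read, and the
      flink name assigned or destroyed, only while a single lock is held
      (standing in for dev->object_name_lock), and name creation is refused
      once the count has hit zero. Again, nothing here is the real DRM code:

      #include <pthread.h>

      struct gem_object {
          int handle_count;             /* can be resurrected from 0, so no atomic_t */
          int flink_name;               /* 0 means "no flink name" */
          pthread_mutex_t *name_lock;   /* stands in for dev->object_name_lock */
      };

      /* flink ioctl: only hand out a name while a handle officially exists. */
      static int gem_flink(struct gem_object *obj, int new_name)
      {
          int ret;

          pthread_mutex_lock(obj->name_lock);
          if (obj->handle_count == 0) {
              ret = -1;                         /* raced with the last gem_close: refuse */
          } else {
              if (!obj->flink_name)
                  obj->flink_name = new_name;   /* the name holds its own reference */
              ret = obj->flink_name;
          }
          pthread_mutex_unlock(obj->name_lock);
          return ret;
      }

      /* Last-handle close: the decrement and the name destruction sit under the
       * same lock, so flink can never see a half-torn-down object. */
      static void gem_handle_delete_last(struct gem_object *obj)
      {
          pthread_mutex_lock(obj->name_lock);
          if (--obj->handle_count == 0 && obj->flink_name)
              obj->flink_name = 0;              /* drop the name and its reference */
          pthread_mutex_unlock(obj->name_lock);
      }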
    • drm: Make drm_get_platform_dev() static · 66cc8b6b
      Committed by Lespiau, Damien
      It's only used in drm_platform.c.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • 15f3b9d9
    • drm: Make drm_fb_cma_describe() static · 2c9c52e8
      Committed by Lespiau, Damien
      This function is only used in drm_fb_cma_helper.c.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm: Remove 2 unused defines · a03eb838
      Committed by Lespiau, Damien
      These were introduced in the first DRM mode-setting commit:
      
        commit f453ba04
        Author: Dave Airlie <airlied@redhat.com>
        Date:   Fri Nov 7 14:05:41 2008 -0800
      
            DRM: add mode setting support
      
            Add mode setting support to the DRM layer.
      
      But they are unused.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm: Make drm_mode_remove() static · 86f422d5
      Committed by Lespiau, Damien
      It's only used in drm_crtc.c.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm: Remove drm_mode_list_concat() · 67587e86
      Committed by Lespiau, Damien
      The last user was removed in
      
        commit 575dc34e
        Author: Dave Airlie <airlied@redhat.com>
        Date:   Mon Sep 7 18:43:26 2009 +1000
      
            drm/kms: remove old std mode fallback code.
      
            The new code adds modes in the helper, which makes more sense;
            I disliked the non-driver code adding modes.
      Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
      Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>