1. 16 Mar 2014 (3 commits)
  2. 13 Mar 2014 (1 commit)
  3. 21 Jan 2014 (1 commit)
  4. 14 Jan 2014 (1 commit)
  5. 18 Dec 2013 (1 commit)
  6. 09 Oct 2013 (1 commit)
    • drm: kill ->gem_init_object() and friends · 16eb5f43
      David Herrmann authored
      All drivers embed gem-objects into their own buffer objects. There is no
      reason to keep drm_gem_object_alloc(), gem->driver_private and
      ->gem_init_object() anymore.
      
      New drivers are highly encouraged to do the same. There is no benefit in
      allocating gem-objects separately.
      
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Alex Deucher <alexdeucher@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Inki Dae <inki.dae@samsung.com>
      Cc: Ben Skeggs <skeggsb@gmail.com>
      Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Acked-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
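      A minimal sketch of the embedding pattern this commit standardizes on;
      struct foo_bo and foo_bo_create() are hypothetical driver names:

          struct foo_bo {
                  struct drm_gem_object base;  /* embedded, never allocated separately */
                  /* driver-private state follows */
          };

          static struct foo_bo *foo_bo_create(struct drm_device *dev, size_t size)
          {
                  struct foo_bo *bo = kzalloc(sizeof(*bo), GFP_KERNEL);
                  int ret;

                  if (!bo)
                          return ERR_PTR(-ENOMEM);
                  /* replaces the removed drm_gem_object_alloc()/->gem_init_object() path */
                  ret = drm_gem_object_init(dev, &bo->base, size);
                  if (ret) {
                          kfree(bo);
                          return ERR_PTR(ret);
                  }
                  return bo;
          }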
  7. 30 Aug 2013 (1 commit)
  8. 27 Aug 2013 (2 commits)
    • drm/gem: implement vma access management · ca481c9b
      David Herrmann authored
      We implement automatic vma mmap() access management for all drivers using
      gem_mmap. We use the vma manager to add each open-file that creates a
      gem-handle to the vma-node of the underlying gem object. Once the handle
      is destroyed, we drop the open-file again.
      
      This allows us to use drm_vma_node_is_allowed() on _any_ gem object to see
      whether an open-file is granted access. In drm_gem_mmap() we use this to
      verify that unprivileged users cannot guess gem offsets and map arbitrary
      buffers.
      
      Note that this manages access for _all_ gem users (also TTM+GEM), but the
      actual access checks are only done for drm_gem_mmap(). TTM drivers use the
      TTM mmap helpers, which need to do that separately.
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
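      A hedged sketch of that flow (simplified, not the verbatim upstream code):

          /* when a gem handle is created for an open-file */
          ret = drm_vma_node_allow(&obj->vma_node, filp);

          /* when that handle is destroyed */
          drm_vma_node_revoke(&obj->vma_node, filp);

          /* in drm_gem_mmap(), before setting up the mapping */
          if (!drm_vma_node_is_allowed(&obj->vma_node, filp))
                  return -EACCES;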
    • drm/vma: add access management helpers · 88d7ebe5
      David Herrmann authored
      The VMA offset manager uses a device-global address-space. Hence, any
      user can currently map any offset-node they want. They only need to guess
      the right offset. If we wanted per open-file offset spaces, we'd either
      need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
      neither really scales, we implement access management in the VMA
      manager itself.
      
      We use an rb-tree to store open-files for each VMA node. On each mmap
      call, GEM, TTM or the drivers must check whether the current user is
      allowed to map this file.
      
      We add a separate lock for each node as there is no generic lock available
      for the caller to protect the node easily.
      
      As we currently don't know whether an object may be used for mmap(), we
      have to do access management for all objects. If it turns out to slow down
      handle creation/deletion significantly, we can optimize it in several
      ways:
       - Most times only a single filp is added per bo so we could use a static
         "struct file *main_filp" which is checked/added/removed first before we
         fall back to the rbtree+drm_vma_offset_file.
         This could be even done lockless with rcu.
       - Let user-space pass a hint whether mmap() should be supported on the
         bo and avoid access-management if not.
       - .. there are probably more ideas once we have benchmarks ..
      
      v2: add drm_vma_node_verify_access() helper
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
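      The helper surface this adds, roughly as declared at the time (the
      rb-tree of struct drm_vma_offset_file entries hangs off each node):

          int  drm_vma_node_allow(struct drm_vma_offset_node *node, struct file *filp);
          void drm_vma_node_revoke(struct drm_vma_offset_node *node, struct file *filp);
          bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node, struct file *filp);
          int  drm_vma_node_verify_access(struct drm_vma_offset_node *node, struct file *filp);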
  9. 21 Aug 2013 (7 commits)
    • drm/prime: Always add exported buffers to the handle cache · d0b2c533
      Daniel Vetter authored
      ... not only when the dma-buf is freshly created. In contrived
      examples someone else could have exported/imported the dma-buf already
      and handed us the gem object with a flink name. If such an object gets
      reexported as a dma_buf we won't have it in the handle cache already,
      which breaks the guarantee that for dma-buf imports we always hand
      back an existing handle if there is one.
      
      This is exercised by igt/prime_self_import/with_one_bo_two_files
      
      Now if we extend the locked sections just a notch more we can also
      plug the racy buf/handle cache setup in handle_to_fd:
      
      If evil userspace races a concurrent gem close against a prime export
      operation we can end up tearing down the gem handle before the dma buf
      handle cache is set up. When handle_to_fd gets around to adding the
      handle to the cache there will be no one left to clean it up,
      effectively leaking the bo (and the dma-buf, since the handle cache
      holds a ref on the dma-buf):
      
      Thread A			Thread B
      
      handle_to_fd:
      
      lookup gem object from handle
      creates new dma_buf
      
      				gem_close on the same handle
      				obj->dma_buf is set, but file priv buf
      				handle cache has no entry
      
      				obj->handle_count drops to 0
      
      drm_prime_add_buf_handle sets up the handle cache
      
      -> We have a dma-buf reference in the handle cache, but since the
      handle_count of the gem object already dropped to 0, no one will clean
      it up. When closing the drm device fd we'll hit the WARN_ON in
      drm_prime_destroy_file_private.
      
      The important change is to extend the critical section of the
      filp->prime.lock to cover the gem handle lookup. This serializes with
      a concurrent gem handle close.
      
      This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
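      A simplified sketch of the extended critical section (abbreviated; the
      helper names follow the commit message, error paths elided):

          mutex_lock(&file_priv->prime.lock);
          obj = drm_gem_object_lookup(dev, file_priv, handle);  /* now inside the lock */
          ...
          /* always add the buf to the handle cache, even for pre-existing dma-bufs */
          ret = drm_prime_add_buf_handle(&file_priv->prime, dmabuf, handle);
          ...
          mutex_unlock(&file_priv->prime.lock);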
    • drm/prime: Simplify drm_gem_remove_prime_handles · 838cd445
      Daniel Vetter authored
      With the reworked semantics and locking of the obj->dma_buf pointer,
      it is always set as long as there's still a gem handle
      around and a dma_buf associated with this gem object.
      
      Also, the per file-priv lookup-cache for dma-buf importing is now
      unified between foreign and native objects.
      
      Hence we don't need to special-case the cleanup any more and can simply
      drop the clause which only runs for foreign objects, i.e. with
      obj->import_attach set.
      
      Note that with this change (actually with the previous one to always
      set up obj->dma_buf even for foreign objects) it is no longer required
      to set obj->import_attach when importing a foreign object. So update
      comments accordingly, too.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: proper locking+refcounting for obj->dma_buf link · 319c933c
      Daniel Vetter authored
      The export dma-buf cache is semantically similar to a flink name. So
      semantically it makes sense to treat it the same and remove the name
      (i.e. the dma_buf pointer) and its references when the last gem handle
      disappears.
      
      Again we need to be careful, but doubly so: not only could someone
      race an export against a gem close ioctl (so we need to recheck
      obj->handle_count again when assigning the new name), but multiple
      exports can also race against one another. This is prevented by
      holding the dev->object_name_lock across the entire section which
      touches obj->dma_buf.
      
      With the new scheme we also need to reinstate the obj->dma_buf link at
      import time (in case the only reference userspace has held in-between
      was through the dma-buf fd and not through any native gem handle). For
      simplicity we don't check whether it's a native object but
      unconditionally set up that link - with the new scheme of removing the
      obj->dma_buf reference when the last handle disappears we can do that.
      
      To make it clear that this is not just for exported buffers anymore
      also rename it from export_dma_buf to dma_buf.
      
      To make sure that no one can race an fd_to_handle or handle_to_fd with
      gem_close we use the same tricks as in flink of extending the
      dev->object_name_lock critical section. With this change we finally
      have a guaranteed 1:1 relationship (at least for native objects)
      between gem objects and dma-bufs, even accounting for races (which can
      happen since the dma-buf itself holds a reference while in-flight).
      
      This prevents igt/prime_self_import/export-vs-gem_close-race from
      Oopsing the kernel. There is still a leak though since the per-file
      priv dma-buf/handle cache handling is racy. That will be fixed in a
      later patch.
      
      v2: Remove the bogus dma_buf_put from the export_and_register_object
      failure path if we've raced with the handle count dropping to 0.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
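      The locking rule, sketched (simplified): obj->dma_buf is only touched
      under dev->object_name_lock, and only while a gem handle still exists.

          mutex_lock(&dev->object_name_lock);
          if (obj->handle_count) {
                  get_dma_buf(dmabuf);    /* the link owns a dma_buf reference */
                  obj->dma_buf = dmabuf;  /* dropped when handle_count hits 0 */
          }
          mutex_unlock(&dev->object_name_lock);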
    • drm/gem: completely close gem_open vs. gem_close races · 20228c44
      Daniel Vetter authored
      The gem flink name holds a reference to the object itself, and this
      self-reference would prevent a flink'ed object from ever being
      freed. To break that loop we remove the flink name when the last
      userspace handle disappears, i.e. when obj->handle_count reaches 0.
      
      Now in gem_open we drop the dev->object_name_lock between the flink
      name lookup and actually adding the handle. This means a concurrent
      gem_close of the last handle could result in the flink name getting
      reaped right in between, i.e.
      
      Thread 1		Thread 2
      gem_open		gem_close
      
      flink -> obj lookup
      			handle_count drops to 0
      			remove flink name
      create_handle
      handle_count++
      
      If someone now flinks this object again, we'll get a new flink name.
      
      We can close this race by removing the lock dropping and making the
      entire lookup+handle_create sequence atomic. Unfortunately to still be
      able to share the handle_create logic this requires a
      handle_create_tail function which drops the lock - we can't hold the
      object_name_lock while calling into a driver's ->gem_open callback.
      
      Note that for flink fixing this race isn't really important, since
      racing gem_open against gem_close is clearly a userspace bug. And no
      matter how the race ends, we won't leak any references.
      
      But with dma-buf where the userspace dma-buf fd itself is refcounted
      this is a valid sequence and hence we should fix it. Therefore this
      patch here is just a warm-up exercise (and for consistency between
      flink buffer sharing and dma-buf buffer sharing with self-imports).
      
      Also note that this extension of the critical section in gem_open
      protected by dev->object_name_lock only works because it's now a
      mutex: A spinlock would conflict with the potential memory allocation
      in idr_preload().
      
      This is exercised by igt/gem_flink_race/flink_name.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
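      Sketch of the now-atomic gem_open path (simplified; the split lets
      drivers' ->gem_open callbacks run without the name lock held):

          mutex_lock(&dev->object_name_lock);
          obj = idr_find(&dev->object_name_idr, (int)args->name);
          if (obj)
                  drm_gem_object_reference(obj);
          ...
          /* drops dev->object_name_lock internally */
          ret = drm_gem_handle_create_tail(file_priv, obj, &handle);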
    • drm/gem: switch dev->object_name_lock to a mutex · cd4f013f
      Daniel Vetter authored
      I want to wrap the creation of a dma-buf from a gem object in it,
      so that the obj->export_dma_buf cache can be atomically filled in.
      
      Instead of creating a new mutex just for that variable, I figured
      I could reuse the existing dev->object_name_lock, especially since
      the new semantics will exactly mirror the flink obj->name already
      protected by that lock.
      
      v2: idr_preload/idr_preload_end is now an atomic section, so need to
      move the mutex locking outside.
      
      [airlied: fix up conflict with patch to make debugfs use lock]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
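      The v2 note, sketched out: the lock must be taken outside the atomic
      idr_preload()/idr_preload_end() section, and since idr_preload() may
      allocate, a sleeping mutex works where a spinlock would not.

          mutex_lock(&dev->object_name_lock);   /* may sleep: needs a mutex */
          idr_preload(GFP_KERNEL);              /* preallocates, then atomic */
          spin_lock(&file_priv->table_lock);
          ret = idr_alloc(&file_priv->object_idr, obj, 1, 0, GFP_NOWAIT);
          spin_unlock(&file_priv->table_lock);
          idr_preload_end();
          mutex_unlock(&dev->object_name_lock);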
    • drm/gem: make drm_gem_object_handle_unreference_unlocked static · becee2a5
      Daniel Vetter authored
      No one outside of drm should use this, the official interfaces are
      drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting
      is purely an implementation detail of gem.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: fix up flink name create race · a8e11d1c
      Daniel Vetter authored
      This is the 2nd attempt; I've always been a bit dissatisfied with the
      tricky nature of the first one:
      
      http://lists.freedesktop.org/archives/dri-devel/2012-July/025451.html
      
      The issue is that the flink ioctl can race with calling gem_close on
      the last gem handle. In that case we'll end up with a zero handle
      count, but a flink name (and its corresponding reference), which
      results in a neat space leak.
      
      In my first attempt I've solved this by rechecking the handle count.
      But fundamentally the issue is that ->handle_count isn't your usual
      refcount - it can be resurrected from 0 among other things.
      
      For those special beasts, atomic_t often suggests way more ordering than
      it actually guarantees. To prevent being tricked by those hairy
      semantics, take the easy way out and simply protect the handle count with the
      existing dev->object_name_lock.
      
      With that change implemented it's dead easy to fix the flink vs. gem
      close race: when we try to create the name we simply have to check
      whether there's still officially a gem handle around and if not refuse
      to create the flink name. Since the handle count decrement and flink
      name destruction is now also protected by that lock, the race is gone
      and we can't ever leak the flink reference again.
      
      Outside of the drm core only the exynos driver looks at the handle
      count, and tbh I have no idea why (it's just for debug dmesg output
      luckily).
      
      I've considered inlining the drm_gem_object_handle_free, but I plan to
      add more name-like things (like the exported dma_buf) to this scheme,
      so it's clearer to leave the handle freeing in its own function.
      
      This is exercised by the new gem_flink_race i-g-t testcase, which on
      my snb leaks gem objects at a rate of roughly 1k objects/s.
      
      v2: Fix up the error path handling in handle_create and make it more
      robust by simply calling object_handle_unreference.
      
      v3: Fix up the handle_unreference logic bug - atomic_dec_and_test
      returns 1 for 0. Oops.
      
      v4: Squash in inlining of drm_gem_object_handle_reference as suggested
      by Dave Airlie and add a note that we now have a testcase.
      
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Inki Dae <inki.dae@samsung.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
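      The flink-side check, sketched (simplified, error paths elided):

          mutex_lock(&dev->object_name_lock);
          if (!obj->handle_count) {
                  ret = -ENOENT;  /* last gem_close already ran: refuse the name */
                  goto err;
          }
          if (!obj->name) {
                  /* allocate the flink name; the reference it holds is now
                   * guaranteed to be dropped under this same lock */
                  ...
          }
          mutex_unlock(&dev->object_name_lock);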
  10. 19 Aug 2013 (5 commits)
  11. 07 Aug 2013 (1 commit)
  12. 26 Jul 2013 (1 commit)
  13. 25 Jul 2013 (1 commit)
    • drm/gem: convert to new unified vma manager · 0de23977
      David Herrmann authored
      Use the new vma manager instead of the old hashtable. Also convert all
      drivers to use the new convenience helpers. This drops all the
      (map_list.hash.key << PAGE_SHIFT) nonsense.
      
      Locking and access-management are exactly the same as before, with an
      additional lock inside of the vma-manager, which strictly wouldn't be
      needed for gem.
      
      v2:
       - rebase on drm-next
       - init nodes via drm_vma_node_reset() in drm_gem.c
      v3:
       - fix tegra
      v4:
       - remove duplicate if (drm_vma_node_has_offset()) checks
       - inline now trivial drm_vma_node_offset_addr() calls
      v5:
       - skip node-reset on gem-init due to kzalloc()
       - do not allow mapping gem-objects with offsets (backwards compat)
       - remove unnecessary casts
      
      Cc: Inki Dae <inki.dae@samsung.com>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Thierry Reding <thierry.reding@gmail.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Acked-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
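      A driver's mmap-offset path after the conversion, sketched:

          ret = drm_gem_create_mmap_offset(obj);  /* sets up obj->vma_node */
          if (ret)
                  return ret;
          /* the offset userspace passes to mmap(), no hashtable key shifting */
          args->offset = drm_vma_node_offset_addr(&obj->vma_node);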
  14. 23 Jul 2013 (1 commit)
  15. 02 Jul 2013 (1 commit)
  16. 28 Jun 2013 (2 commits)
  17. 08 Jun 2013 (1 commit)
  18. 01 May 2013 (1 commit)
    • drm/prime: keep a reference from the handle to exported dma-buf (v6) · 219b4733
      Dave Airlie authored
      Currently we have a problem with this:
      1. i915: create gem object
      2. i915: export gem object to prime
      3. radeon: import gem object
      4. close prime fd
      5. radeon: unref object
      6. i915: unref object
      
      i915 has an imported object reference in its file priv that isn't
      cleaned up properly until fd close. The reference gets added at step 2,
      but at step 6 we don't have enough info to clean it up.
      
      The solution is to take a reference on the dma-buf when we export it,
      and drop the reference when the gem handle goes away.
      
      So when we export a dma_buf from a gem object, we keep track of it
      with the handle and take a reference to the dma_buf. When we close
      the handle (i.e. userspace is finished with the buffer), we drop
      the reference to the dma_buf, and it gets collected.
      
      This patch isn't meant to fix any other problem or bikesheds, and it doesn't
      fix any races with other scenarios.
      
      v1.1: move export symbol line back up.
      
      v2: okay, I had to do a bit more, as the first patch showed a leak
      on one of my tests that I found using the dma-buf debugfs support.
      The problem case is exporting a buffer twice with the same handle;
      we'd add another export handle for it unnecessarily. We now fail
      if we try to export the same object with a different gem handle,
      though I'm not sure that is a case I want to support, and I've
      made the code WARN_ON if we hit something like that.
      
      v2.1: rebase this patch, write better commit msg.
      v3: cleanup error handling, track import vs export in linked list,
      these two patches were separate previously, but seem to work better
      like this.
      v4: danvet is correct, this code is no longer useful, since the buffer
      better exist, so remove it.
      v5: always take a reference to the dma buf object, import or export.
      (Imre Deak contributed this originally)
      v6: square the circle, remove import vs export tracking now
      that there is no difference
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: stable@vger.kernel.org
      Signed-off-by: Dave Airlie <airlied@redhat.com>
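      The lifetime rule, sketched (simplified): the gem handle pins the dma-buf.

          /* at export time: take a reference the handle now owns */
          get_dma_buf(dmabuf);
          ...
          /* at gem_close time: drop it so the dma-buf can be collected */
          dma_buf_put(dmabuf);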
  19. 28 Feb 2013 (2 commits)
  20. 09 Oct 2012 (1 commit)
    • mm: kill vma flag VM_RESERVED and mm->reserved_vm counter · 314e51b9
      Konstantin Khlebnikov authored
      A long time ago, in v2.4, VM_RESERVED kept the swapout process off the VMA;
      it has since lost its original meaning but still has some effects:
      
       | effect                 | alternative flags
      -+------------------------+---------------------------------------------
      1| account as reserved_vm | VM_IO
      2| skip in core dump      | VM_IO, VM_DONTDUMP
      3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
      
      This patch removes the reserved_vm counter from mm_struct.  Nobody seems to
      care about it: it is not exported to userspace directly, and it only
      reduces the total_vm shown in /proc.
      
      Thus VM_RESERVED can be replaced with VM_IO or the pair VM_DONTEXPAND | VM_DONTDUMP.
      
      remap_pfn_range() and io_remap_pfn_range() set VM_IO|VM_DONTEXPAND|VM_DONTDUMP.
      remap_vmalloc_range() sets VM_DONTEXPAND | VM_DONTDUMP.
      
      [akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Morris <james.l.morris@oracle.com>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Kentaro Takeda <takedakn@nttdata.co.jp>
      Cc: Matt Helsley <matthltc@us.ibm.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
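      The mechanical replacement in a driver's mmap path, per the table above:

          /* before: */
          vma->vm_flags |= VM_IO | VM_RESERVED;
          /* after: */
          vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;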
  21. 03 Oct 2012 (1 commit)
  22. 16 Jul 2012 (1 commit)
    • drm: Add colouring to the range allocator · 6b9d89b4
      Chris Wilson authored
      In order to support snoopable memory on non-LLC architectures (so that
      we can bind vgem objects into the i915 GATT for example), we have to
      avoid the prefetcher on the GPU from crossing memory domains and so
      prevent allocation of a snoopable PTE immediately following an uncached
      PTE. To do that, we need to extend the range allocator with support for
      tracking and segregating different node colours.
      
      This will be used by i915 to segregate memory domains within the GTT.
      
      v2: Now with more drm_mm helpers and less driver interference.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
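      A hedged sketch of the new hook (foo_color_adjust is hypothetical; i915
      uses an equivalent callback to keep a guard page between colours):

          static void foo_color_adjust(struct drm_mm_node *node, unsigned long color,
                                       unsigned long *start, unsigned long *end)
          {
                  /* shrink the usable hole when the neighbouring node's colour differs */
                  if (node->color != color)
                          *start += PAGE_SIZE;
          }
          ...
          mm.color_adjust = foo_color_adjust;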
  23. 23 May 2012 (1 commit)
  24. 22 May 2012 (1 commit)
  25. 12 May 2012 (1 commit)
    • drm: pass dev to drm_vm_{open,close}_locked() · b06d66be
      Rob Clark authored
      Previously these functions would assume that vma->vm_file was the
      drm_file.  But in some cases the drm driver needs to use
      something else for the backing file (such as the tmpfs filp), and then this
      assumption is no longer true.  Still, vma->vm_private_data remains the
      GEM object.
      
      With this change, now the drm_device comes from the GEM object rather
      than the drm_file so the driver is more free to play with vma->vm_file.
      
      The scenario where this comes up is mmap'ing of cached dmabufs
      on non-coherent systems, where the driver needs to use fault handling
      and PTE shootdown to simulate coherency.  We can't use the vma->vm_file
      of the dmabuf, which is using anon_inode's address_space.  The most
      straightforward thing to do is to use the GEM object's obj->filp for
      vma->vm_file in all cases, for which we need this patch.
      Signed-off-by: Rob Clark <rob@ti.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
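      The resulting signatures and call pattern, sketched:

          void drm_vm_open_locked(struct drm_device *dev, struct vm_area_struct *vma);
          void drm_vm_close_locked(struct drm_device *dev, struct vm_area_struct *vma);

          /* callers derive the device from the GEM object, not from vma->vm_file */
          struct drm_gem_object *obj = vma->vm_private_data;
          drm_vm_open_locked(obj->dev, vma);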