1. 05 Aug 2016 (3 commits)
  2. 04 Aug 2016 (5 commits)
  3. 20 Jul 2016 (4 commits)
  4. 20 May 2016 (2 commits)
  5. 18 Apr 2016 (1 commit)
  6. 14 Apr 2016 (2 commits)
    • drm/i915: Prevent leaking of -EIO from i915_wait_request() · f4457ae7
      Chris Wilson committed
      Reporting -EIO from i915_wait_request() has proven very problematic
      over the years, with numerous hard-to-reproduce bugs cropping up in the
      corner case where a reset occurs and the code wasn't expecting such
      an error.
      
      If we reset the GPU, or have detected a hang and wish to reset the
      GPU, the request is forcibly completed and the wait broken. Currently, we
      report either -EAGAIN or -EIO in order for the caller to retreat and
      restart the wait (if appropriate) after dropping and then reacquiring
      the struct_mutex (essential to allow the GPU reset to proceed). However,
      if we take the view that the request is complete (no further work will
      be done on it by the GPU because it is dead and soon to be reset), then
      we can proceed with the task at hand and then drop the struct_mutex,
      allowing the reset to occur. This transfers the burden of checking
      whether it is safe to proceed onto the caller; in all but one
      instance it is safe - completely eliminating the source of all spurious
      -EIO.
      
      Of note, we only have two API entry points where we expect that
      userspace can observe an EIO. First is when submitting an execbuf, if
      the GPU is terminally wedged, then the operation cannot succeed and an
      -EIO is reported. Secondly, existing userspace uses the throttle ioctl
      to detect an already wedged GPU before starting to use HW acceleration
      (or to confirm that the GPU is wedged after an error condition). So if
      the GPU is wedged when the user calls throttle, also report -EIO.
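
      As an aside, a minimal userspace probe for this behaviour might look
      like the sketch below (not part of the patch; the helper name
      gpu_is_wedged is hypothetical, while DRM_IOCTL_I915_GEM_THROTTLE and
      drmIoctl are the real ioctl and libdrm wrapper):

      #include <errno.h>
      #include <xf86drm.h>   /* drmIoctl() */
      #include <i915_drm.h>  /* DRM_IOCTL_I915_GEM_THROTTLE */

      /* Returns non-zero if the GPU is terminally wedged. */
      static int gpu_is_wedged(int fd)
      {
              if (drmIoctl(fd, DRM_IOCTL_I915_GEM_THROTTLE, NULL) == -1)
                      return errno == EIO;
              return 0;
      }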
      
      v2: Split more carefully the change to i915_wait_request() and assorted
      ABI from the reset handling.
      v3: Add a couple of WARN_ON(EIO) to the interruptible modesetting code
      so that we don't start to leak EIO there in future (and break our hang
      resistant modesetting).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-9-git-send-email-chris@chris-wilson.co.uk
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-1-git-send-email-chris@chris-wilson.co.uk
      f4457ae7
    • drm/i915: Store the reset counter when constructing a request · 299259a3
      Chris Wilson committed
      As the request is only valid during the same global reset epoch, we can
      record the current reset_counter when constructing the request and reuse
      it when waiting upon that request in future. This removes a very hairy
      atomic check serialised by the struct_mutex at the time of waiting and
      allows us to transfer those waits to a central dispatcher for all
      waiters and all requests.
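
      As an illustrative sketch (not the literal patch; field and helper
      names are approximate), the idea is:

      /* At request construction: snapshot the current reset epoch. */
      req->reset_counter = i915_reset_counter(&dev_priv->gpu_error);

      /* At wait time: if the epoch has advanced, the request was
       * force-completed by a reset, so stop waiting. */
      if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error))
              return -EAGAIN;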
      
      PS: With per-engine resets, we obviously cannot assume a global reset
      epoch for the requests - a per-engine epoch makes the most sense. The
      challenge then is how to handle checking in the waiter for when to break
      the wait, as the fine-grained reset may also want to requeue the
      request (i.e. the assumption that just because the epoch changes the
      request is completed may be broken - or we just avoid breaking that
      assumption with the fine-grained resets).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-7-git-send-email-chris@chris-wilson.co.uk
      299259a3
  7. 12 Apr 2016 (4 commits)
  8. 05 Apr 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov committed
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized.  And it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion on whether
      PAGE_CACHE_* or PAGE_* constants should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header files,
      so I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
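
      For illustration (not part of the patch), a typical hunk produced by
      the script looks like:

      -       index = pos >> PAGE_CACHE_SHIFT;
      -       page_cache_release(page);
      +       index = pos >> PAGE_SHIFT;
      +       put_page(page);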
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  9. 02 Mar 2016 (1 commit)
  10. 26 Feb 2016 (1 commit)
  11. 16 Feb 2016 (1 commit)
    • mm/gup: Introduce get_user_pages_remote() · 1e987790
      Dave Hansen committed
      For protection keys, we need to understand whether protections
      should be enforced in software or not.  In general, we enforce
      protections when working on our own task, but not when working on
      another task's.  We call these "current" and "remote" operations.
      
      This patch introduces a new get_user_pages() variant:
      
              get_user_pages_remote()
      
      which is a replacement for when get_user_pages() is called on a
      non-current tsk/mm.
      
      We also introduce a new gup flag, FOLL_REMOTE, which can be used
      with the "__" gup variants to get this new behavior.
      
      The uprobes is_trap_at_addr() location holds mmap_sem and
      calls get_user_pages(current->mm) on an instruction address.  This
      makes it a pretty unique gup caller.  Being an instruction access
      and also really originating from the kernel (vs. the app), I opted
      to consider this a 'remote' access where protection keys will not
      be enforced.
      
      Without protection keys, this patch should not change any behavior.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: jack@suse.cz
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210154.3F0E51EA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1e987790
  12. 08 Feb 2016 (1 commit)
  13. 04 Feb 2016 (1 commit)
  14. 26 Jan 2016 (1 commit)
  15. 13 Oct 2015 (1 commit)
    • drm/i915: Deny wrapping an userptr into a framebuffer · cc917ab4
      Chris Wilson committed
      Pinning a userptr onto the hardware raises interesting questions about
      the lifetime of such a surface, as the framebuffer extends that life
      beyond the client's address space. That is, the hardware will need to
      keep scanning out from the backing storage even after the client wants
      to remap its address space. As the hardware pins the backing storage,
      the userptr becomes invalid and this raises a WARN when the client
      tries to unmap its address space. The situation can be even more
      complicated when the buffer is passed between processes, between a
      client and display server, where the lifetime and hardware access is
      even more confusing. Deny it.
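
      A minimal sketch of such a denial in the framebuffer init path (the
      exact check in the patch may differ; obj->userptr.mm is the marker
      the driver keeps for userptr-backed objects):

      if (obj->userptr.mm) {
              DRM_DEBUG("attempting to use a userptr for a framebuffer, denied\n");
              return -EINVAL;
      }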
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      cc917ab4
  16. 06 Oct 2015 (3 commits)
    • drm/i915: Use a task to cancel the userptr on invalidate_range · 380996aa
      Chris Wilson committed
      Whilst discussing possible ways to trigger an invalidate_range on a
      userptr with an aliased GGTT mmapping (and so cause a struct_mutex
      deadlock), the conclusion is that we can, and we must, prevent any
      possible deadlock by avoiding taking the mutex at all during
      invalidate_range. This has numerous advantages, all of which stem from
      avoiding the sleeping function inside the unknown context. In
      particular, it simplifies the invalidate_range because we no longer
      have to juggle the spinlock/mutex and can just hold the spinlock
      for the entire walk. To compensate, we have to make get_pages a bit more
      complicated in order to serialise with a pending cancel_userptr worker.
      As we hold the struct_mutex, we have no choice but to return EAGAIN and
      hope that the worker is then flushed before we retry after reacquiring
      the struct_mutex.
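
      A hedged sketch of the get_pages side of this dance (names are
      illustrative, not the exact patch):

      /* Under struct_mutex: if a cancel_userptr worker is still in
       * flight we cannot wait for it here without risking the very
       * deadlock we are avoiding, so ask the caller to retry. */
      if (obj->userptr.work != NULL)
              return -EAGAIN;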
      
      The important caveat is that the invalidate_range itself is no longer
      synchronous. There exists a small but definite period in time in which
      the old PTE's page remains accessible via the GPU. Note however that the
      physical pages themselves are not invalidated by the mmu_notifier, just
      the CPU view of the address space. The impact should be limited to a
      delay in pages being flushed, rather than a possibility of writing to
      the wrong pages. The only race condition that this worsens is remapping
      a userptr active on the GPU, where fresh work may still reference the
      old pages due to struct_mutex contention. Given that userspace is racing
      with the GPU, it is fair to say that the results are undefined.
      
      v2: Only queue (and importantly only take one refcnt) the worker once.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      380996aa
    • drm/i915: Fix userptr deadlock with aliased GTT mmappings · e4b946bf
      Chris Wilson committed
      Michał Winiarski found a really evil way to trigger a struct_mutex
      deadlock with userptr. He found that if he allocated a userptr bo and
      then GTT mmaped another bo, or even itself, at the same address as the
      userptr using MAP_FIXED, he could then cause a deadlock any time we
      had to invalidate the GTT mmappings (so at will). Tvrtko then found that by
      repeatedly allocating GTT mmappings he could alias with an old userptr
      mmap and also trigger the deadlock.
      
      To counteract the deadlock, we make the observation that we only need
      to take the struct_mutex if the object has any pages to revoke, and that
      before userspace can alias with the userptr address space, it must have
      invalidated the userptr->pages. Thus if we can check for those pages
      outside of the struct_mutex, we can avoid the deadlock. To do so we
      introduce a separate flag for userptr objects that we can inspect from
      the mmu-notifier underneath its spinlock.
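
      A hedged sketch of the flag check (illustrative names; the real
      patch threads this through the mmu-notifier's interval-tree walk):

      spin_lock(&mn->lock);
      if (!obj->userptr.active) {
              /* No obj->pages outstanding: nothing for the GPU to
               * revoke, so we can skip taking struct_mutex entirely. */
              spin_unlock(&mn->lock);
              return;
      }
      spin_unlock(&mn->lock);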
      
      The patch makes one eye-catching change: the removal of serial=0
      after detecting a to-be-freed object inside the invalidate walker. I
      felt setting serial=0 was a questionable pessimisation: it denies us the
      chance to reuse the current iterator for the next loop (before it is
      freed) and being explicit makes the reader question the validity of the
      locking (since the object-free race could occur elsewhere). The
      serialisation of the iterator is through the spinlock; if the object is
      freed before the next loop then the notifier.serial will be incremented
      and we start the walk from the beginning as we detect the invalid cache.
      
      To try and tame the error paths and interactions with the userptr->active
      flag, we have to do a fair amount of rearranging of get_pages_userptr().
      
      v2: Grammar fixes
      v3: Reorder set-active so that it is only set when obj->pages is set
      (and so needs cancellation). Only the order of setting obj->pages and
      the active-flag is crucial. Calling gup after invalidate-range begin
      means the userptr sees the new set of backing storage (and so will not
      need to invalidate its new pages), but we have to be careful not to set
      the active-flag prior to successfully establishing obj->pages.
      v4: Take the active->flag early so we know in the mmu-notifier when we
      have to cancel a pending gup-worker.
      v5: Rearrange the error path so that it is not so convoluted
      v6: Set pinned to 0 when negative before calling release_pages()
      Reported-by: Michał Winiarski <michal.winiarski@intel.com>
      Testcase: igt/gem_userptr_blits/map-fixed*
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e4b946bf
    • drm/i915: Only update the current userptr worker · 68d6c840
      Chris Wilson committed
      The userptr worker allows for a slight race condition whereupon there
      may be two or more threads calling get_user_pages for the same object. When
      we have the array of pages, we then serialise the update of the object.
      However, the worker should overwrite the obj->userptr.work pointer
      if and only if it is the active one. Currently we clear it for a
      secondary worker, with the effect that we may rarely force a second
      lookup.
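
      In effect (a hedged sketch, not the literal diff):

      /* Only the worker that is still the active one may retire the
       * pointer; a superseded worker must leave it for its successor. */
      if (obj->userptr.work == &work->work)
              obj->userptr.work = NULL;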
      
      v2: Rebase and rename a variable to avoid 80cols
      v3: Mention v2
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      68d6c840
  17. 15 Aug 2015 (1 commit)
  18. 14 Jul 2015 (1 commit)
    • drm/i915: avoid leaking DMA mappings · e2273302
      Imre Deak committed
      We have 3 types of DMA mappings for GEM objects:
      1. physically contiguous for stolen and for objects needing contiguous
         memory
      2. DMA-buf mappings imported via a DMA-buf attach operation
      3. SG DMA mappings for shmem backed and userptr objects
      
      For 1. and 2. the lifetime of the DMA mapping matches the lifetime of the
      corresponding backing pages and so in practice we create/release the
      mapping in the object's get_pages/put_pages callback.
      
      For 3. the lifetime of the mapping matches that of any existing GPU binding
      of the object, so we'll create the mapping when the object is bound to
      the first vma and release the mapping when the object is unbound from its
      last vma.
      
      Since the object can be bound to multiple vmas, we can end up creating a
      new DMA mapping in the 3. case even if the object already had one. This
      is not allowed by the DMA API and can lead to leaked mapping data and
      IOMMU memory space starvation in certain cases. For example HW IOMMU
      drivers (intel_iommu) allocate a new range from their memory space
      whenever a mapping is created, silently overriding a pre-existing
      mapping.
      
      Fix this by moving the creation/removal of DMA mappings to the object's
      get_pages/put_pages callbacks. These callbacks already check for and do
      an early return in case of any nested calls. This way objects of the 3.
      case also become more like the other object types.
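
      A hedged sketch of the resulting lifetime (illustrative, using the
      generic DMA API; the driver has its own wrappers):

      /* In get_pages: map once, when the backing pages are acquired. */
      if (!dma_map_sg(dev->dev, st->sgl, st->nents, DMA_BIDIRECTIONAL))
              return -ENOMEM;

      /* In put_pages: unmap when the pages are released, so repeated
       * vma binds in between never create a second mapping. */
      dma_unmap_sg(dev->dev, st->sgl, st->nents, DMA_BIDIRECTIONAL);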
      
      I noticed this issue by enabling DMA debugging, which got disabled after
      a while due to its internal mapping tables getting full. It also reported
      errors in connection with random other drivers that did a DMA mapping for
      an address that was previously mapped by i915 but was never released.
      Besides these diagnostic messages and the memory space starvation
      problem for IOMMUs, I'm not aware of this causing a real issue.
      
      The fix is based on a patch from Chris.
      
      v2:
      - move the DMA mapping create/remove calls to the get_pages/put_pages
        callbacks instead of adding new callbacks for these (Chris)
      v3:
      - also fix the get_page cache logic on the userptr async path (Chris)
      Signed-off-by: Imre Deak <imre.deak@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: stable@vger.kernel.org
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      e2273302
  19. 20 May 2015 (1 commit)
    • drm/i915: Use uninterruptible mutex_lock for userptr bo creation · 281400ff
      Chris Wilson committed
      Mika encountered one pathological scenario under X where acquiring all
      the mm locks (required to insert a mmu notifier) was very slow, so slow
      that by the time we tried to lock the struct_mutex with the usual call
      to i915_mutex_lock_interruptible(), X's signal timer had fired, causing
      us to restart the ioctl (and so we looped indefinitely).
      
      While I suspect this is the result of another bug (something leaking mm
      perhaps?) we can forgo the error checking and interruptible nature of the
      lock here, so we only have to pay the expense once and get on with it.
      This does expose the userptr creation routine to a driver livelock,
      though, by not being interruptible.
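
      The change itself reduces to (a sketch):

      /* Was: ret = i915_mutex_lock_interruptible(dev); plus error
       * handling. Taking the lock uninterruptibly means a pending
       * signal can no longer force the expensive setup to be redone. */
      mutex_lock(&dev->struct_mutex);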
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      [danvet: Init ret to avoid issues reported by PRTS.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      281400ff
  20. 13 May 2015 (1 commit)
  21. 05 Feb 2015 (1 commit)
  22. 29 Sep 2014 (2 commits)
    • drm/i915: Do not leak pages when freeing userptr objects · c479f438
      Tvrtko Ursulin committed
      sg_alloc_table_from_pages() can build us a table with coalesced ranges, which
      means we need to iterate over pages and not sg table entries when releasing
      page references.
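
      A hedged sketch of the corrected release loop (the era-appropriate
      page_cache_release() is used; it is spelled put_page() today):

      struct sg_page_iter sg_iter;

      for_each_sg_page(st->sgl, &sg_iter, st->nents, 0)
              page_cache_release(sg_page_iter_page(&sg_iter));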
      Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "Barbalho, Rafael" <rafael.barbalho@intel.com>
      Tested-by: Rafael Barbalho <rafael.barbalho@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: stable@vger.kernel.org
      [danvet: Remove unused local variable sg.]
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      c479f438
    • drm/i915: Do not store the error pointer for a failed userptr registration · e9681366
      Chris Wilson committed
      If we fail to create our mmu notifier, we report the error back and
      currently store the error pointer inside the i915_mm_struct. This not only
      causes subsequent registrations of the same mm to fail (an issue if the
      first attempt was interrupted by a signal and needed to be restarted) but
      also causes us to eventually try to free the error pointer.
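
      A hedged sketch of the bug class (illustrative names; the create
      helper stands in for the driver's mmu-notifier setup):

      mn = i915_mmu_notifier_create(mm);  /* may return ERR_PTR(-EINTR) */
      if (IS_ERR(mn))
              return PTR_ERR(mn);         /* report, but do not cache it */
      mm->mn = mn;                        /* only ever store a valid pointer */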
      
      [   73.419599] BUG: unable to handle kernel NULL pointer dereference at 000000000000004c
      [   73.419831] IP: [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
      [   73.420065] PGD 8650c067 PUD 870bb067 PMD 0
      [   73.420319] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
      [   73.420580] CPU: 0 PID: 42 Comm: kworker/0:1 Tainted: G        W      3.17.0-rc6+ #1561
      [   73.420837] Hardware name: Intel Corporation SandyBridge Platform/LosLunas CRB, BIOS ASNBCPT1.86C.0075.P00.1106281639 06/28/2011
      [   73.421405] Workqueue: events __i915_mm_struct_free__worker
      [   73.421724] task: ffff880088a81220 ti: ffff880088168000 task.ti: ffff880088168000
      [   73.422051] RIP: 0010:[<ffffffff8114af33>]  [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
      [   73.422410] RSP: 0018:ffff88008816bd50  EFLAGS: 00010286
      [   73.422765] RAX: 0000000000000003 RBX: ffff880086485400 RCX: 0000000000000000
      [   73.423137] RDX: ffff88016d80ee90 RSI: ffff880086485400 RDI: 0000000000000044
      [   73.423513] RBP: ffff88008816bd70 R08: 0000000000000001 R09: 0000000000000000
      [   73.423895] R10: 0000000000000320 R11: 0000000000000001 R12: 0000000000000044
      [   73.424282] R13: ffff880166e5f008 R14: ffff88016d815200 R15: ffff880166e5f040
      [   73.424682] FS:  0000000000000000(0000) GS:ffff88016d800000(0000) knlGS:0000000000000000
      [   73.425099] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   73.425537] CR2: 000000000000004c CR3: 0000000087f5f000 CR4: 00000000000407f0
      [   73.426157] Stack:
      [   73.426597]  ffff880088a81248 ffff880166e5f038 fffffffffffffffc ffff880166e5f008
      [   73.427096]  ffff88008816bd98 ffffffff814a75f2 ffff880166e5f038 ffff8800880f8a28
      [   73.427603]  ffff88016d812ac0 ffff88008816be00 ffffffff8106321a ffffffff810631af
      [   73.428119] Call Trace:
      [   73.428606]  [<ffffffff814a75f2>] __i915_mm_struct_free__worker+0x42/0x80
      [   73.429116]  [<ffffffff8106321a>] process_one_work+0x1ba/0x610
      [   73.429632]  [<ffffffff810631af>] ? process_one_work+0x14f/0x610
      [   73.430153]  [<ffffffff810636db>] worker_thread+0x6b/0x4a0
      [   73.430671]  [<ffffffff8108d67d>] ? trace_hardirqs_on+0xd/0x10
      [   73.431501]  [<ffffffff81063670>] ? process_one_work+0x610/0x610
      [   73.432030]  [<ffffffff8106a206>] kthread+0xf6/0x110
      [   73.432561]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
      [   73.433100]  [<ffffffff8169c22c>] ret_from_fork+0x7c/0xb0
      [   73.433644]  [<ffffffff8106a110>] ? __kthread_parkme+0x80/0x80
      [   73.434194] Code: 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 46 4c 85 c0 0f 8e 10 01 00 00 55 48 89 e5 41 55 41 54 53 48 89 f3 49 89 fc 48 83 ec 08 <48> 83 7f 08 00 0f 84 b1 00 00 00 48 c7 c7 40 e6 ac 82 e8 26 65
      [   73.435942] RIP  [<ffffffff8114af33>] mmu_notifier_unregister+0x23/0x130
      [   73.437017]  RSP <ffff88008816bd50>
      [   73.437704] CR2: 000000000000004c
      
      Fixes regression from commit ad46cb53
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Thu Aug 7 14:20:40 2014 +0100
      
          drm/i915: Prevent recursive deadlock on releasing a busy userptr
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84207
      Testcase: igt/gem_render_copy_redux
      Testcase: igt/gem_userptr_blits/create-destroy-sync
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Jacek Danecki <jacek.danecki@intel.com>
      Cc: "Gong, Zhipeng" <zhipeng.gong@intel.com>
      Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      e9681366
  23. 08 Sep 2014 (1 commit)
    • drm/i915: Prevent recursive deadlock on releasing a busy userptr · ad46cb53
      Chris Wilson committed
      During release of the GEM object we hold the struct_mutex. As the
      object may be holding onto the last reference for the task->mm,
      calling mmput() may trigger exit_mmap() which closes the vma,
      which will call drm_gem_vm_close() and attempt to reacquire
      the struct_mutex. In order to avoid that recursion, we have
      to defer the mmput() until after we drop the struct_mutex,
      i.e. we need to schedule a worker to do the clean up. A further issue
      spotted by Tvrtko was caused when we took a GTT mmapping of a userptr
      buffer object. In that case, we would never call mmput as the object
      would be cyclically referenced by the GTT mmapping and not freed upon
      process exit - keeping the entire process mm alive after the process
      task was reaped. The fix employed is to replace the mm_users/mmput()
      reference handling with mm_count/mmdrop() for the shared i915_mm_struct.
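
      A hedged sketch of the deferred release (helper names illustrative;
      the real worker also relies on the switch to the mm_count reference):

      static void __i915_mm_struct_free__worker(struct work_struct *work)
      {
              struct i915_mm_struct *mm = container_of(work, typeof(*mm), work);

              i915_mmu_notifier_free(mm->mn, mm->mm);  /* hypothetical helper */
              mmdrop(mm->mm);  /* drops mm_count; never calls exit_mmap() */
              kfree(mm);
      }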
      
         INFO: task test_surfaces:1632 blocked for more than 120 seconds.
               Tainted: GF          O 3.14.5+ #1
         "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
         test_surfaces   D 0000000000000000     0  1632   1590 0x00000082
          ffff88014914baa8 0000000000000046 0000000000000000 ffff88014914a010
          0000000000012c40 0000000000012c40 ffff8800a0058210 ffff88014784b010
          ffff88014914a010 ffff880037b1c820 ffff8800a0058210 ffff880037b1c824
         Call Trace:
          [<ffffffff81582499>] schedule+0x29/0x70
          [<ffffffff815825fe>] schedule_preempt_disabled+0xe/0x10
          [<ffffffff81583b93>] __mutex_lock_slowpath+0x183/0x220
          [<ffffffff81583c53>] mutex_lock+0x23/0x40
          [<ffffffffa005c2a3>] drm_gem_vm_close+0x33/0x70 [drm]
          [<ffffffff8115a483>] remove_vma+0x33/0x70
          [<ffffffff8115a5dc>] exit_mmap+0x11c/0x170
          [<ffffffff8104d6eb>] mmput+0x6b/0x100
          [<ffffffffa00f44b9>] i915_gem_userptr_release+0x89/0xc0 [i915]
          [<ffffffffa00e6706>] i915_gem_free_object+0x126/0x250 [i915]
          [<ffffffffa005c06a>] drm_gem_object_free+0x2a/0x40 [drm]
          [<ffffffffa005cc32>] drm_gem_object_handle_unreference_unlocked+0xe2/0x120 [drm]
          [<ffffffffa005ccd4>] drm_gem_object_release_handle+0x64/0x90 [drm]
          [<ffffffff8127ffeb>] idr_for_each+0xab/0x100
          [<ffffffffa005cc70>] ?  drm_gem_object_handle_unreference_unlocked+0x120/0x120 [drm]
          [<ffffffff81583c46>] ? mutex_lock+0x16/0x40
          [<ffffffffa005c354>] drm_gem_release+0x24/0x40 [drm]
          [<ffffffffa005b82b>] drm_release+0x3fb/0x480 [drm]
          [<ffffffff8118d482>] __fput+0xb2/0x260
          [<ffffffff8118d6de>] ____fput+0xe/0x10
          [<ffffffff8106f27f>] task_work_run+0x8f/0xf0
          [<ffffffff81052228>] do_exit+0x1a8/0x480
          [<ffffffff81052551>] do_group_exit+0x51/0xc0
          [<ffffffff810525d7>] SyS_exit_group+0x17/0x20
          [<ffffffff8158e092>] system_call_fastpath+0x16/0x1b
      
      v2: Incorporate feedback from Tvrtko and remove the unnecessary mm
      referencing when creating the i915_mm_struct and improve some of the
      function names and comments.
      Reported-by: Jacek Danecki <jacek.danecki@intel.com>
      Test-case: igt/gem_userptr_blits/process-exit*
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Tested-by: "Gong, Zhipeng" <zhipeng.gong@intel.com>
      Cc: Jacek Danecki <jacek.danecki@intel.com>
      Cc: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
      Reviewed-by: "Ursulin, Tvrtko" <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: stable@vger.kernel.org # hold off until 3.17 ships for additional testing
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      ad46cb53