1. 29 Oct, 2016 5 commits
  2. 19 Oct, 2016 1 commit
  3. 09 Sep, 2016 1 commit
  4. 19 Aug, 2016 1 commit
  5. 05 Aug, 2016 3 commits
  6. 04 Aug, 2016 5 commits
  7. 20 Jul, 2016 4 commits
  8. 20 May, 2016 2 commits
  9. 18 Apr, 2016 1 commit
  10. 14 Apr, 2016 2 commits
    • drm/i915: Prevent leaking of -EIO from i915_wait_request() · f4457ae7
      Committed by Chris Wilson
      Reporting -EIO from i915_wait_request() has proven very problematic
      over the years, with numerous hard-to-reproduce bugs cropping up in
      the corner case where a reset occurs and the code wasn't expecting
      such an error.
      
      If we reset the GPU, or have detected a hang and wish to reset the
      GPU, the request is forcibly completed and the wait broken. Currently,
      we report either -EAGAIN or -EIO in order for the caller to retreat
      and restart the wait (if appropriate) after dropping and then
      reacquiring the struct_mutex (essential to allow the GPU reset to
      proceed). However, if we take the view that the request is complete
      (no further work will be done on it by the GPU because it is dead and
      soon to be reset), then we can proceed with the task at hand and then
      drop the struct_mutex, allowing the reset to occur. This transfers the
      burden of checking whether it is safe to proceed to the caller, which
      in all but one instance is safe - completely eliminating the source
      of all spurious -EIO.
      
      Of note, we only have two API entry points where we expect that
      userspace can observe an EIO. First is when submitting an execbuf: if
      the GPU is terminally wedged, then the operation cannot succeed and an
      -EIO is reported. Secondly, existing userspace uses the throttle ioctl
      to detect an already wedged GPU before starting to use HW acceleration
      (or to confirm that the GPU is wedged after an error condition). So if
      the GPU is wedged when the user calls throttle, also report -EIO.
      
      v2: Split more carefully the change to i915_wait_request() and assorted
      ABI from the reset handling.
      v3: Add a couple of WARN_ON(EIO) to the interruptible modesetting code
      so that we don't start to leak EIO there in the future (and break our
      hang-resistant modesetting).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-9-git-send-email-chris@chris-wilson.co.uk
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-1-git-send-email-chris@chris-wilson.co.uk
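      
      The scheme lends itself to a small illustration. Below is a hedged,
      self-contained sketch of the idea, using hypothetical stand-in types
      and names (gpu, request, wait_request, and so on) rather than the
      driver's actual ones:
      
          #include <errno.h>
          #include <stdbool.h>
          
          /* Hypothetical stand-ins for the driver state; illustrative only. */
          struct gpu { bool wedged; };
          struct request { struct gpu *gpu; bool completed; };
          
          /* Waiting no longer reports -EIO: a hang forcibly completes the
           * request, so the waiter treats it as done and returns success. */
          static int wait_request(struct request *rq)
          {
                  if (rq->gpu->wedged) {
                          rq->completed = true;  /* the reset completes it anyway */
                          return 0;
                  }
                  /* ... normal wait elided ... */
                  return rq->completed ? 0 : -EAGAIN;
          }
          
          /* The only two entry points where userspace may observe -EIO: */
          static int execbuf_ioctl(struct gpu *gpu)
          {
                  return gpu->wedged ? -EIO : 0;  /* submission cannot succeed */
          }
          
          static int throttle_ioctl(struct gpu *gpu)
          {
                  return gpu->wedged ? -EIO : 0;  /* report the wedged GPU */
          }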
    • drm/i915: Store the reset counter when constructing a request · 299259a3
      Committed by Chris Wilson
      As the request is only valid during the same global reset epoch, we can
      record the current reset_counter when constructing the request and reuse
      it when waiting upon that request in future. This removes a very hairy
      atomic check serialised by the struct_mutex at the time of waiting and
      allows us to transfer those waits to a central dispatcher for all
      waiters and all requests.
      
      PS: With per-engine resets, we obviously cannot assume a global reset
      epoch for the requests - a per-engine epoch makes the most sense. The
      challenge then is how to handle checking in the waiter for when to break
      the wait, as the fine-grained reset may also want to requeue the
      request (i.e. the assumption that just because the epoch changes the
      request is completed may be broken - or we just avoid breaking that
      assumption with the fine-grained resets).
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Link: http://patchwork.freedesktop.org/patch/msgid/1460565315-7748-7-git-send-email-chris@chris-wilson.co.uk
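      
      A minimal sketch of the epoch check described above, with hypothetical
      stand-in types and names (not the driver's actual ones):
      
          /* The reset epoch is sampled once, when the request is built. */
          struct gpu { unsigned int reset_counter; };
          struct request {
                  struct gpu *gpu;
                  unsigned int reset_counter;  /* epoch at construction */
          };
          
          static void request_init(struct request *rq, struct gpu *gpu)
          {
                  rq->gpu = gpu;
                  rq->reset_counter = gpu->reset_counter;
          }
          
          static int request_wait(struct request *rq)
          {
                  /* A different epoch means a reset has already forcibly
                   * completed the request; no hairy atomic check under
                   * struct_mutex is needed at wait time. */
                  if (rq->reset_counter != rq->gpu->reset_counter)
                          return 0;
                  /* ... normal wait elided ... */
                  return 0;
          }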
  11. 12 Apr, 2016 4 commits
  12. 05 Apr, 2016 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion about whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files.  I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation
      will also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
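      
      For illustration, a hypothetical call site after the mechanical
      conversion (the function itself is made up; the macros and helpers
      are the real ones named in the patch):
      
          #include <linux/mm.h>  /* get_page(), PAGE_SHIFT, PAGE_ALIGN() */
          
          static unsigned long pin_page_and_count(struct page *page,
                                                  loff_t pos, unsigned long len)
          {
                  get_page(page);                   /* was: page_cache_get(page) */
                  len = PAGE_ALIGN(len);            /* was: PAGE_CACHE_ALIGN(len) */
                  return (pos + len) >> PAGE_SHIFT; /* was: >> PAGE_CACHE_SHIFT */
          }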
  13. 02 Mar, 2016 1 commit
  14. 26 Feb, 2016 1 commit
  15. 16 Feb, 2016 1 commit
    • mm/gup: Introduce get_user_pages_remote() · 1e987790
      Committed by Dave Hansen
      For protection keys, we need to understand whether protections
      should be enforced in software or not.  In general, we enforce
      protections when working on our own task, but not when working
      on others.  We call these "current" and "remote" operations.
      
      This patch introduces a new get_user_pages() variant:
      
              get_user_pages_remote()
      
      which is a replacement for when get_user_pages() is called on a
      non-current tsk/mm.
      
      We also introduce a new gup flag, FOLL_REMOTE, which can be used
      with the "__" gup variants to get this new behavior.
      
      The uprobes is_trap_at_addr() location holds mmap_sem and
      calls get_user_pages(current->mm) on an instruction address.  This
      makes it a pretty unique gup caller.  Being an instruction access
      and also really originating from the kernel (vs. the app), I opted
      to consider this a 'remote' access where protection keys will not
      be enforced.
      
      Without protection keys, this patch should not change any behavior.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: jack@suse.cz
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/20160212210154.3F0E51EA@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
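      
      A hedged sketch of calling the new variant on a foreign mm, using the
      eight-argument signature as introduced by this patch (later kernels
      folded write/force into a gup_flags argument); the helper and its
      error handling are hypothetical:
      
          #include <linux/mm.h>
          #include <linux/sched.h>
          
          /* Pin a single page of another process's address space. */
          static struct page *grab_remote_page(struct task_struct *tsk,
                                               struct mm_struct *mm,
                                               unsigned long addr)
          {
                  struct page *page;
                  long ret;
          
                  down_read(&mm->mmap_sem);
                  ret = get_user_pages_remote(tsk, mm, addr, 1 /* nr_pages */,
                                              0 /* write */, 0 /* force */,
                                              &page, NULL);
                  up_read(&mm->mmap_sem);
          
                  return ret == 1 ? page : NULL;
          }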
  16. 08 Feb, 2016 1 commit
  17. 04 Feb, 2016 1 commit
  18. 26 Jan, 2016 1 commit
  19. 13 Oct, 2015 1 commit
    • drm/i915: Deny wrapping a userptr into a framebuffer · cc917ab4
      Committed by Chris Wilson
      Pinning a userptr onto the hardware raises interesting questions about
      the lifetime of such a surface, as the framebuffer extends that life
      beyond the client's address space. That is, the hardware will need to
      keep scanning out from the backing storage even after the client wants
      to remap its address space. As the hardware pins the backing storage,
      the userptr becomes invalid and this raises a WARN when the client
      tries to unmap its address space. The situation can be even more
      complicated when the buffer is passed between processes, between a
      client and a display server, where the lifetime and hardware access
      are even more confusing. Deny it.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
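      
      A minimal sketch of the shape of the denial, with hypothetical field
      and function names (not the driver's actual ones):
      
          #include <errno.h>
          #include <stdbool.h>
          
          struct gem_object { bool is_userptr; };  /* illustrative stand-in */
          
          static int framebuffer_init_check(const struct gem_object *obj)
          {
                  /* Scanout would pin the backing storage beyond the
                   * lifetime of the client's address space, so refuse to
                   * wrap a userptr into a framebuffer. */
                  if (obj->is_userptr)
                          return -EINVAL;
                  return 0;
          }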
  20. 06 Oct, 2015 3 commits
    • drm/i915: Use a task to cancel the userptr on invalidate_range · 380996aa
      Committed by Chris Wilson
      Whilst discussing possible ways to trigger an invalidate_range on a
      userptr with an aliased GGTT mmapping (and so cause a struct_mutex
      deadlock), the conclusion is that we can, and we must, prevent any
      possible deadlock by avoiding taking the mutex at all during
      invalidate_range. This has numerous advantages, all of which stem from
      avoiding a sleeping function inside an unknown context. In
      particular, it simplifies the invalidate_range because we no longer
      have to juggle the spinlock/mutex and can just hold the spinlock
      for the entire walk. To compensate, we have to make get_pages a bit more
      complicated in order to serialise with a pending cancel_userptr worker.
      As we hold the struct_mutex, we have no choice but to return EAGAIN and
      hope that the worker is then flushed before we retry after reacquiring
      the struct_mutex.
      
      The important caveat is that the invalidate_range itself is no longer
      synchronous. There exists a small but definite period of time in which
      the old PTE's page remains accessible via the GPU. Note however that the
      physical pages themselves are not invalidated by the mmu_notifier, just
      the CPU view of the address space. The impact should be limited to a
      delay in pages being flushed, rather than a possibility of writing to
      the wrong pages. The only race condition that this worsens is remapping
      a userptr active on the GPU, where fresh work may still reference the
      old pages due to struct_mutex contention. Given that userspace is racing
      with the GPU, it is fair to say that the results are undefined.
      
      v2: Only queue (and importantly only take one refcnt) the worker once.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
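      
      A hedged sketch of the scheme, with hypothetical names: the notifier
      only takes a spinlock and queues a cancellation worker (once, per v2);
      the worker acquires the struct_mutex later, outside the notifier:
      
          #include <linux/spinlock.h>
          #include <linux/workqueue.h>
          
          struct userptr_obj {                     /* illustrative stand-in */
                  spinlock_t lock;
                  bool worker_queued;
                  struct work_struct cancel_work;  /* takes struct_mutex when run */
          };
          
          static void on_invalidate_range(struct userptr_obj *obj)
          {
                  spin_lock(&obj->lock);       /* never sleeps: no deadlock */
                  if (!obj->worker_queued) {   /* v2: queue (and ref) only once */
                          obj->worker_queued = true;
                          schedule_work(&obj->cancel_work);
                  }
                  spin_unlock(&obj->lock);
          }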
    • drm/i915: Fix userptr deadlock with aliased GTT mmappings · e4b946bf
      Committed by Chris Wilson
      Michał Winiarski found a really evil way to trigger a struct_mutex
      deadlock with userptr. He found that if he allocated a userptr bo and
      then GTT mmaped another bo, or even itself, at the same address as the
      userptr using MAP_FIXED, he could then cause a deadlock any time we
      had to invalidate the GTT mmappings (so at will). Tvrtko then found
      that by repeatedly allocating GTT mmappings he could alias with an old
      userptr
      mmap and also trigger the deadlock.
      
      To counteract the deadlock, we make the observation that we only need
      to take the struct_mutex if the object has any pages to revoke, and that
      before userspace can alias with the userptr address space, it must have
      invalidated the userptr->pages. Thus if we can check for those pages
      outside of the struct_mutex, we can avoid the deadlock. To do so we
      introduce a separate flag for userptr objects that we can inspect from
      the mmu-notifier underneath its spinlock.
      
      The patch makes one eye-catching change: the removal of serial=0
      after detecting a to-be-freed object inside the invalidate walker. I
      felt setting serial=0 was a questionable pessimisation: it denies us the
      chance to reuse the current iterator for the next loop (before it is
      freed) and being explicit makes the reader question the validity of the
      locking (since the object-free race could occur elsewhere). The
      serialisation of the iterator is through the spinlock; if the object is
      freed before the next loop then the notifier.serial will be incremented
      and we start the walk from the beginning as we detect the invalid cache.
      
      To try and tame the error paths and interactions with the userptr->active
      flag, we have to do a fair amount of rearranging of get_pages_userptr().
      
      v2: Grammar fixes
      v3: Reorder set-active so that it is only set when obj->pages is set
      (and so needs cancellation). Only the order of setting obj->pages and
      the active-flag is crucial. Calling gup after invalidate-range begin
      means the userptr sees the new set of backing storage (and so will not
      need to invalidate its new pages), but we have to be careful not to set
      the active-flag prior to successfully establishing obj->pages.
      v4: Take the active->flag early so we know in the mmu-notifier when we
      have to cancel a pending gup-worker.
      v5: Rearrange the error path so that it is not so convoluted
      v6: Set pinned to 0 when negative before calling release_pages()
      Reported-by: Michał Winiarski <michal.winiarski@intel.com>
      Testcase: igt/gem_userptr_blits/map-fixed*
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Michał Winiarski <michal.winiarski@intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
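      
      A minimal sketch of the observation, with hypothetical names: a flag
      visible under the mmu-notifier's spinlock records whether there are
      pages to revoke, so the invalidate path can skip the struct_mutex
      entirely when there is nothing to do:
      
          #include <stdbool.h>
          
          struct userptr_obj {          /* illustrative stand-in */
                  bool active;          /* set only once obj->pages exists */
          };
          
          /* Called under the mmu-notifier spinlock, never struct_mutex. */
          static bool invalidate_needs_mutex(const struct userptr_obj *obj)
          {
                  /* Aliasing the range first invalidates userptr->pages,
                   * clearing the flag, so the aliased case never takes the
                   * mutex and the deadlock cycle is broken. */
                  return obj->active;
          }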
    • drm/i915: Only update the current userptr worker · 68d6c840
      Committed by Chris Wilson
      The userptr worker allows for a slight race condition whereupon there
      may be two or more threads calling get_user_pages for the same object.
      When we have the array of pages, we then serialise the update of the
      object. However, the worker should overwrite the obj->userptr.work
      pointer if and only if it is the active one. Currently we clear it for
      a secondary worker, with the effect that we may rarely force a second
      lookup.
      
      v2: Rebase and rename a variable to avoid 80cols
      v3: Mention v2
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
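      
      A minimal sketch of the rule, with hypothetical names: a finishing
      worker clears obj->userptr.work only if it is still the active one,
      so a superseded worker cannot wipe out its successor and force a
      needless second lookup:
      
          #include <stddef.h>
          
          struct userptr { void *work; };  /* illustrative stand-in */
          
          static void worker_finish(struct userptr *u, void *self)
          {
                  if (u->work == self)  /* only the active worker clears it */
                          u->work = NULL;
                  /* a stale worker just drops its reference and exits */
          }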