1. 08 Nov 2017, 1 commit
  2. 25 Oct 2017, 1 commit
    •
      locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns... · 6aa7de05
      Mark Rutland authored
      locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()
      
      Please do not apply this to mainline directly, instead please re-run the
      coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6aa7de05
  3. 08 Jul 2017, 1 commit
    •
      x86/syscalls: Check address limit on user-mode return · 5ea0727b
      Thomas Garnier authored
      Ensure the address limit is a user-mode segment before returning to
      user-mode. Otherwise a process can corrupt kernel-mode memory and elevate
      privileges [1].
      
      The set_fs() function sets the TIF_SETFS flag to force a slow path on
      return. In the slow path, the address limit is checked to verify it is
      USER_DS when required.
      
      The addr_limit_user_check function is added as a cross-architecture
      function to check the address limit.
      
      [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: kernel-hardening@lists.openwall.com
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Pratyush Anand <panand@redhat.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Will Drewry <wad@chromium.org>
      Cc: linux-api@vger.kernel.org
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/20170615011203.144108-1-thgarnie@google.com
      5ea0727b
  4. 08 Mar 2017, 1 commit
  5. 02 Mar 2017, 1 commit
  6. 25 Dec 2016, 1 commit
  7. 15 Sep 2016, 2 commits
  8. 27 Jul 2016, 1 commit
  9. 10 Jul 2016, 2 commits
  10. 15 Jun 2016, 2 commits
  11. 19 Apr 2016, 1 commit
  12. 10 Mar 2016, 3 commits
  13. 17 Feb 2016, 1 commit
  14. 30 Jan 2016, 1 commit
  15. 29 Jan 2016, 1 commit
  16. 21 Dec 2015, 1 commit
  17. 18 Oct 2015, 1 commit
  18. 09 Oct 2015, 12 commits
  19. 07 Oct 2015, 1 commit
  20. 05 Aug 2015, 1 commit
  21. 17 Jul 2015, 1 commit
  22. 07 Jul 2015, 3 commits
    •
      x86/entry: Add new, comprehensible entry and exit handlers written in C · c5c46f59
      Andy Lutomirski authored
      The current x86 entry and exit code, written in a mixture of assembly and
      C code, is incomprehensible due to being open-coded in a lot of places
      without coherent documentation.
      
      It appears to work primarily by luck and duct tape: i.e. obvious runtime
      failures were fixed on-demand, without re-thinking the design.
      
      Due to those reasons our confidence level in that code is low, and it is
      very difficult to incrementally improve.
      
      Add new code written in C, in preparation for simply deleting the old
      entry code.
      
      prepare_exit_to_usermode() is a new function that will handle all
      slow path exits to user mode.  It is called with IRQs disabled
      and it leaves us in a state in which it is safe to immediately
      return to user mode.  IRQs must not be re-enabled at any point
      after prepare_exit_to_usermode() returns and user mode is actually
      entered. (We can, of course, fail to enter user mode and treat
      that failure as a fresh entry to kernel mode.)
      
      All callers of do_notify_resume() will be migrated to call
      prepare_exit_to_usermode() instead; prepare_exit_to_usermode() needs
      to do everything that do_notify_resume() does today, but it also
      takes care of scheduling and context tracking.  Unlike
      do_notify_resume(), it does not need to be called in a loop.
      
      syscall_return_slowpath() is exactly what it sounds like: it will
      be called on any syscall exit slow path. It will replace
      syscall_trace_leave() and it calls prepare_exit_to_usermode() on the
      way out.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Denys Vlasenko <vda.linux@googlemail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/c57c8b87661a4152801d7d3786eac2d1a2f209dd.1435952415.git.luto@kernel.org
      [ Improved the changelog a bit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c5c46f59
    •
      x86/entry: Add enter_from_user_mode() and use it in syscalls · feed36cd
      Andy Lutomirski authored
      Changing the x86 context tracking hooks is dangerous because
      there are no good checks that we track our context correctly.
      Add a helper to check that we're actually in CONTEXT_USER when
      we enter from user mode and wire it up for syscall entries.
      
      Subsequent patches will wire this up for all non-NMI entries as
      well.  NMIs are their own special beast and cannot currently
      switch overall context tracking state.  Instead, they have their
      own special RCU hooks.
      
      This is a tiny speedup if !CONFIG_CONTEXT_TRACKING (removes a
      branch) and a tiny slowdown if CONFIG_CONTEXT_TRACKING (adds a
      layer of indirection).  Eventually, we should fix up the core
      context tracking code to supply a function that does what we
      want (and can be much simpler than user_exit), which will enable
      us to get rid of the extra call.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Denys Vlasenko <vda.linux@googlemail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/853b42420066ec3fb856779cdc223a6dcb5d355b.1435952415.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      feed36cd
    •
      x86/entry: Move C entry and exit code to arch/x86/entry/common.c · 1f484aa6
      Andy Lutomirski authored
      The entry and exit C helpers were confusingly scattered between
      ptrace.c and signal.c, even though they aren't specific to
      ptrace or signal handling.  Move them together in a new file.
      
      This change just moves code around. It does not change any behavior.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Denys Vlasenko <vda.linux@googlemail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/324d686821266544d8572423cc281f961da445f4.1435952415.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1f484aa6