1. 23 Sep 2017, 1 commit
    • x86/asm: Fix inline asm call constraints for Clang · f5caf621
      Authored by Josh Poimboeuf
      For inline asm statements which have a CALL instruction, we list the
      stack pointer as a constraint to convince GCC to ensure the frame
      pointer is set up first:
      
        static inline void foo()
        {
      	register void *__sp asm(_ASM_SP);
      	asm("call bar" : "+r" (__sp))
        }
      
      Unfortunately, that pattern causes Clang to corrupt the stack pointer.
      
      The fix is easy: convert the stack pointer register variable to a
      global register variable.
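      
      A minimal sketch of the fixed pattern (variable name as in the
      upstream fix, which also wraps the constraint in an
      ASM_CALL_CONSTRAINT macro):
      
      	/* file scope: a global register variable pinned to the stack pointer */
      	register unsigned long current_stack_pointer asm(_ASM_SP);
      
      	static inline void foo(void)
      	{
      		asm("call bar" : "+r" (current_stack_pointer));
      	}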
      
      It should be noted that the end result is different based on the GCC
      version.  With GCC 6.4, this patch has exactly the same result as
      before:
      
      	defconfig	defconfig-nofp	distro		distro-nofp
       before	9820389		9491555		8816046		8516940
       after	9820389		9491555		8816046		8516940
      
      With GCC 7.2, however, GCC's behavior has changed.  It now changes its
      behavior based on the conversion of the register variable to a global.
      That somehow convinces it to *always* set up the frame pointer before
      inserting *any* inline asm.  (Therefore, listing the variable as an
      output constraint is a no-op and is no longer necessary.)  It's a bit
      overkill, but the performance impact should be negligible.  And in fact,
      there's a nice improvement with frame pointers disabled:
      
      	defconfig	defconfig-nofp	distro		distro-nofp
       before	9796316		9468236		9076191		8790305
       after	9796957		9464267		9076381		8785949
      
      So in summary, while listing the stack pointer as an output constraint
      is no longer necessary for newer versions of GCC, it's still needed for
      older versions.
      Suggested-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dmitriy Vyukov <dvyukov@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/3db862e970c432ae823cf515c52b54fec8270e0e.1505942196.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 08 Jul 2017, 1 commit
    • x86/syscalls: Check address limit on user-mode return · 5ea0727b
      Authored by Thomas Garnier
      Ensure the address limit is a user-mode segment before returning to
      user-mode. Otherwise a process can corrupt kernel-mode memory and elevate
      privileges [1].
      
      The set_fs function sets the TIF_SETFS flag to force a slow path on
      return. In the slow path, the address limit is checked to be USER_DS if
      needed.
      
      The addr_limit_user_check function is added as a cross-architecture
      function to check the address limit.
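      
      A rough sketch of such a check, assuming the uaccess helpers of that
      era (get_fs(), segment_eq(), USER_DS) and the two-argument
      force_sig(); the exact upstream error handling may differ:
      
      	static inline void addr_limit_user_check(void)
      	{
      		/* a leaked KERNEL_DS limit is unrecoverable: kill the task */
      		if (!segment_eq(get_fs(), USER_DS))
      			force_sig(SIGKILL, current);
      	}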
      
      [1] https://bugs.chromium.org/p/project-zero/issues/detail?id=990
      Signed-off-by: Thomas Garnier <thgarnie@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: kernel-hardening@lists.openwall.com
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Pratyush Anand <panand@redhat.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Will Drewry <wad@chromium.org>
      Cc: linux-api@vger.kernel.org
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/20170615011203.144108-1-thgarnie@google.com
  3. 04 Jul 2017, 1 commit
  4. 22 May 2017, 3 commits
    • x86: fix 32-bit case of __get_user_asm_u64() · 33c9e972
      Authored by Linus Torvalds
      The code to fetch a 64-bit value from user space was entirely buggered,
      and has been since the code was merged in early 2016 in commit
      b2f68038 ("x86/mm/32: Add support for 64-bit __get_user() on 32-bit
      kernels").
      
      Happily the buggered routine is almost certainly entirely unused, since
      the normal way to access user space memory is just with the non-inlined
      "get_user()", and the inlined version didn't even historically exist.
      
      The normal "get_user()" case is handled by external hand-written asm in
      arch/x86/lib/getuser.S that doesn't have either of these issues.
      
      There were two independent bugs in __get_user_asm_u64():
      
       - it still did the STAC/CLAC user space access marking, even though
         that is now done by the wrapper macros, see commit 11f1a4b9
         ("x86: reorganize SMAP handling in user space accesses").
      
         This didn't result in a semantic error, it just means that the
         inlined optimized version was hugely less efficient than the
         allegedly slower standard version, since the CLAC/STAC overhead is
         quite high on modern Intel CPUs.
      
       - the double register %eax/%edx was marked as an output, but the %eax
         part of it was touched early in the asm, and could thus clobber other
         inputs to the asm that gcc didn't expect it to touch.
      
         In particular, that meant that the generated code could look like
         this:
      
              mov    (%eax),%eax
              mov    0x4(%eax),%edx
      
         where the load of %edx obviously was _supposed_ to be from the 32-bit
         word that followed the source of %eax, but because %eax was
         overwritten by the first instruction, the source of %edx was
         basically random garbage.
      
      The fixes are trivial: remove the extraneous STAC/CLAC entries, and mark
      the 64-bit output as early-clobber to let gcc know that no inputs should
      alias with the output register.
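      
      An illustrative sketch of the constraint change (not the exact
      upstream asm; "A" names the %eax/%edx register pair):
      
      	/* before: the output pair may alias the address input */
      	: "=r" (retval), "=A" (x)
      	/* after: "&" marks it early-clobber, so gcc keeps the
      	   address register live across both loads */
      	: "=r" (retval), "=&A" (x)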
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: stable@kernel.org   # v4.8+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Clean up x86 unsafe_get/put_user() type handling · 334a023e
      Authored by Linus Torvalds
      Al noticed that unsafe_put_user() had type problems, and fixed them in
      commit a7cc722f ("fix unsafe_put_user()"), which made me look more
      at those functions.
      
      It turns out that unsafe_get_user() had a type issue too: it limited the
      largest size of the type it could handle to "unsigned long".  Which is
      fine with the current users, but doesn't match our existing normal
      get_user() semantics, which can also handle "u64" even when that does
      not fit in a long.
      
      While at it, also clean up the type cast in unsafe_put_user().  We
      actually want to just make it an assignment to the expected type of the
      pointer, because we actually do want warnings from types that don't
      convert silently.  And it makes the code more readable by not having
      that one very long and complex line.
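      
      A sketch of the assignment-based shape (hedged: the surrounding
      error plumbing is simplified):
      
      	#define unsafe_put_user(x, ptr, err_label)			\
      	do {								\
      		int __pu_err;						\
      		/* plain assignment: bad conversions now warn */	\
      		__typeof__(*(ptr)) __pu_val = (x);			\
      		__put_user_size(__pu_val, (ptr), sizeof(*(ptr)),	\
      				__pu_err, -EFAULT);			\
      		if (unlikely(__pu_err))					\
      			goto err_label;					\
      	} while (0)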
      
      [ This patch might become stable material if we ever end up back-porting
        any new users of the unsafe uaccess code, but as things stand now this
        doesn't matter for any current existing uses. ]
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fix unsafe_put_user() · a7cc722f
      Authored by Al Viro
      __put_user_size() relies upon its first argument having the same type as what
      the second one points to; the only other user makes sure of that and
      unsafe_put_user() should do the same.
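      
      A sketch of the fix as described, casting the value to the pointee
      type before it reaches __put_user_size():
      
      	__put_user_size((__typeof__(*(ptr)))(x), (ptr),
      			sizeof(*(ptr)), __pu_err, -EFAULT);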
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  5. 16 May 2017, 1 commit
  6. 30 Mar 2017, 1 commit
  7. 29 Mar 2017, 1 commit
  8. 06 Mar 2017, 2 commits
  9. 06 Dec 2016, 1 commit
  10. 28 Sep 2016, 1 commit
    • x86: separate extable.h, switch sections.h to it · 45caf470
      Authored by Al Viro
      drivers/platform/x86/dell-smo8800.c is touched due to the following obscenity:
      drivers/platform/x86/dell-smo8800.c ->
      	linux/interrupt.h ->
      		linux/hardirq.h ->
      			asm/hardirq.h ->
      				linux/irq.h ->
      					asm/hw_irq.h ->
      						asm/sections.h ->
      							asm/uaccess.h
      is the only chain of includes pulling asm/uaccess.h there.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  11. 16 Sep 2016, 1 commit
  12. 07 Sep 2016, 1 commit
    • x86/uaccess: force copy_*_user() to be inlined · e6971009
      Authored by Kees Cook
      As already done with __copy_*_user(), mark copy_*_user() as __always_inline.
      Without this, the checks for things like __builtin_constant_p() won't
      work consistently in either hardened usercopy or the recent adjustments
      for detecting usercopy overflows at compile time.
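      
      A sketch of the shape of the change (the real x86 wrappers of that
      era carry more checks than shown here):
      
      	static __always_inline unsigned long __must_check
      	copy_from_user(void *to, const void __user *from, unsigned long n)
      	{
      		/* __builtin_constant_p(n) can only fold into a
      		   compile-time size check if this body is inlined */
      		check_object_size(to, n, false);
      		return _copy_from_user(to, from, n);
      	}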
      
      The change in kernel text size is detectable, but very small:
      
       text      data     bss     dec      hex     filename
      12118735  5768608 14229504 32116847 1ea106f vmlinux.before
      12120207  5768608 14229504 32118319 1ea162f vmlinux.after
      Signed-off-by: Kees Cook <keescook@chromium.org>
  13. 31 Aug 2016, 1 commit
    • mm/usercopy: get rid of CONFIG_DEBUG_STRICT_USER_COPY_CHECKS · 0d025d27
      Authored by Josh Poimboeuf
      There are three usercopy warnings which are currently being silenced for
      gcc 4.6 and newer:
      
      1) "copy_from_user() buffer size is too small" compile warning/error
      
         This is a static warning which happens when object size and copy size
         are both const, and copy size > object size.  I didn't see any false
         positives for this one.  So the function warning attribute seems to
         be working fine here.
      
         Note this scenario is always a bug and so I think it should be
         changed to *always* be an error, regardless of
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS.
      
      2) "copy_from_user() buffer size is not provably correct" compile warning
      
         This is another static warning which happens when I enable
         __compiletime_object_size() for new compilers (and
         CONFIG_DEBUG_STRICT_USER_COPY_CHECKS).  It happens when object size
         is const, but copy size is *not*.  In this case there's no way to
         compare the two at build time, so it gives the warning.  (Note the
         warning is a byproduct of the fact that gcc has no way of knowing
         whether the overflow function will be called, so the call isn't dead
         code and the warning attribute is activated.)
      
         So this warning seems to only indicate "this is an unusual pattern,
         maybe you should check it out" rather than "this is a bug".
      
         I get 102(!) of these warnings with allyesconfig and the
         __compiletime_object_size() gcc check removed.  I don't know if there
         are any real bugs hiding in there, but from looking at a small
         sample, I didn't see any.  According to Kees, it does sometimes find
         real bugs.  But the false positive rate seems high.
      
      3) "Buffer overflow detected" runtime warning
      
         This is a runtime warning where object size is const, and copy size >
         object size.
      
      All three warnings (both static and runtime) were completely disabled
      for gcc 4.6 with the following commit:
      
        2fb0815c ("gcc4: disable __compiletime_object_size for GCC 4.6+")
      
      That commit mistakenly assumed that the false positives were caused by a
      gcc bug in __compiletime_object_size().  But in fact,
      __compiletime_object_size() seems to be working fine.  The false
      positives were instead triggered by #2 above.  (Though I don't have an
      explanation for why the warnings supposedly only started showing up in
      gcc 4.6.)
      
      So remove warning #2 to get rid of all the false positives, and re-enable
      warnings #1 and #3 by reverting the above commit.
      
      Furthermore, since #1 is a real bug which is detected at compile time,
      upgrade it to always be an error.
      
      Having done all that, CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is no longer
      needed.
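      
      A sketch of how check #1 turns into a hard error (helper name and
      message are illustrative):
      
      	extern void __compiletime_error("copy_from_user() buffer size is too small")
      	copy_from_user_overflow(void);
      
      	int sz = __compiletime_object_size(to);
      
      	if (__builtin_constant_p(n) && sz >= 0 && sz < n)
      		copy_from_user_overflow();  /* dead unless the sizes provably conflict */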
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 09 Aug 2016, 1 commit
    • unsafe_[get|put]_user: change interface to use a error target label · 1bd4403d
      Authored by Linus Torvalds
      When I initially added the unsafe_[get|put]_user() helpers in commit
      5b24a7a2 ("Add 'unsafe' user access functions for batched
      accesses"), I made the mistake of modeling the interface on our
      traditional __[get|put]_user() functions, which return zero on success,
      or -EFAULT on failure.
      
      That interface is fairly easy to use, but it's actually fairly nasty for
      good code generation, since it essentially forces the caller to check
      the error value for each access.
      
      In particular, since the error handling is already internally
      implemented with an exception handler, and we already use "asm goto" for
      various other things, we could fairly easily make the error cases just
      jump directly to an error label instead, and avoid the need for explicit
      checking after each operation.
      
      So switch the interface to pass in an error label, rather than checking
      the error value in the caller.  Best do it now before we start growing
      more users (the signal handling code in particular would be a good place
      to use the new interface).
      
      So rather than
      
      	if (unsafe_get_user(x, ptr))
      		... handle error ..
      
      the interface is now
      
      	unsafe_get_user(x, ptr, label);
      
      where an error during the user mode fetch will now just cause a jump to
      'label' in the caller.
      
      Right now the actual _implementation_ of this all still ends up being a
      "if (err) goto label", and does not take advantage of any exception
      label tricks, but for "unsafe_put_user()" in particular it should be
      fairly straightforward to convert to using the exception table model.
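      
      A sketch of that exception-table direction (simplified; the size
      dispatch of the real macro is omitted):
      
      	#define unsafe_put_user(x, ptr, label)			\
      		asm goto("1:	mov %1, %0\n"			\
      			 _ASM_EXTABLE(1b, %l2)			\
      			 : : "m" (*(ptr)), "r" (x)		\
      			 : : label)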
      
      Note that "unsafe_get_user()" is much harder to convert to a clever
      exception table model, because current versions of gcc do not allow the
      use of "asm goto" (for the exception) with output values (for the actual
      value to be fetched).  But that is hopefully not a limitation in the
      long term.
      
      [ Also note that it might be a good idea to switch unsafe_get_user() to
        actually _return_ the value it fetches from user space, but this
        commit only changes the error handling semantics ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 27 Jul 2016, 1 commit
  16. 15 Jul 2016, 2 commits
  17. 21 May 2016, 1 commit
  18. 12 May 2016, 1 commit
    • x86/extable: ensure entries are swapped completely when sorting · 50c73890
      Authored by Mathias Krause
      The x86 exception table sorting was changed in commit 29934b0f
      ("x86/extable: use generic search and sort routines") to use the arch
      independent code in lib/extable.c.  However, the patch was mangled
      somehow on its way into the kernel from the last version posted at [1].
      The committed version kind of attempted to incorporate the changes of
      commit 548acf19 ("x86/mm: Expand the exception table logic to allow
      new handling options") as in _completely_ _ignoring_ the x86 specific
      'handler' member of struct exception_table_entry.  This effectively
      broke the sorting as entries will only partly be swapped now.
      
      Fortunately, the x86 Kconfig selects BUILDTIME_EXTABLE_SORT, so the
      exception table doesn't need to be sorted at runtime. However, in case
      that ever changes, we better not break the exception table sorting just
      because of that.
      
      [ Ard Biesheuvel points out that BUILDTIME_EXTABLE_SORT applies to the
        core image only, but we still rely on the sorting routines for modules
        in that case - Linus ]
      
      Fix this by providing a swap_ex_entry_fixup() macro that takes care of
      the 'handler' member.
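      
      The resulting macro, roughly ('fixup' and 'handler' are relative
      offsets, so each must be rebased by the swap distance 'delta'):
      
      	#define swap_ex_entry_fixup(a, b, tmp, delta)		\
      	do {							\
      		(a)->fixup   = (b)->fixup   + (delta);		\
      		(b)->fixup   = (tmp).fixup  - (delta);		\
      		(a)->handler = (b)->handler + (delta);		\
      		(b)->handler = (tmp).handler - (delta);		\
      	} while (0)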
      
      [1] https://lkml.org/lkml/2016/1/27/232
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Fixes: 29934b0f ("x86/extable: use generic search and sort routines")
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 11 May 2016, 1 commit
    • x86/extable: Ensure entries are swapped completely when sorting · 67d7a982
      Authored by Mathias Krause
      The x86 exception table sorting was changed in this recent commit:
      
        29934b0f ("x86/extable: use generic search and sort routines")
      
      ... to use the arch independent code in lib/extable.c. However, the
      patch was mangled somehow on its way into the kernel from the last
      version posted at:
      
        https://lkml.org/lkml/2016/1/27/232
      
      The committed version kind of attempted to incorporate the changes of
      contemporary commit done in the x86 tree:
      
        548acf19 ("x86/mm: Expand the exception table logic to allow new handling options")
      
      ... as in _completely_ _ignoring_ the x86 specific 'handler' member of
      struct exception_table_entry. This effectively broke the sorting as
      entries will only be partly swapped now.
      
      Fortunately, the x86 Kconfig selects BUILDTIME_EXTABLE_SORT, so the
      exception table doesn't need to be sorted at runtime. However, in case
      that ever changes, we better not break the exception table sorting just
      because of that.
      
      Fix this by providing a swap_ex_entry_fixup() macro that takes care of
      the 'handler' member.
      Signed-off-by: Mathias Krause <minipli@googlemail.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Link: http://lkml.kernel.org/r/1462914422-2911-1-git-send-email-minipli@googlemail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 13 Apr 2016, 3 commits
  21. 23 Mar 2016, 1 commit
  22. 24 Feb 2016, 1 commit
  23. 18 Feb 2016, 1 commit
  24. 18 Dec 2015, 2 commits
    • Add 'unsafe' user access functions for batched accesses · 5b24a7a2
      Authored by Linus Torvalds
      The naming is meant to discourage random use: the helper functions are
      not really any more "unsafe" than the traditional double-underscore
      functions (which need the address range checking), but they do need even
      more infrastructure around them, and should not be used willy-nilly.
      
      In addition to checking the access range, these user access functions
      require that you wrap the user access with a "user_access_{begin,end}()"
      pair around it.
      
      That allows architectures that implement kernel user access control
      (x86: SMAP, arm64: PAN) to do the user access control in the wrapping
      user_access_begin/end part, and then batch up the actual user space
      accesses using the new interfaces.
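      
      A usage sketch under this original interface (error-return form; the
      switch to an error-label interface came later):
      
      	unsigned long val;
      	int err = 0;
      
      	user_access_begin();			/* stac on x86 with SMAP */
      	if (unsafe_get_user(val, ptr))
      		err = -EFAULT;
      	user_access_end();			/* clac */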
      
      The main (and hopefully only) use for these are for core generic access
      helpers, initially just the generic user string functions
      (strnlen_user() and strncpy_from_user()).
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86: reorganize SMAP handling in user space accesses · 11f1a4b9
      Authored by Linus Torvalds
      This reorganizes how we do the stac/clac instructions in the user access
      code.  Instead of adding the instructions directly to the same inline
      asm that does the actual user level access and exception handling, add
      them at a higher level.
      
      This is mainly preparation for the next step, where we will expose an
      interface to allow users to mark several accesses together as being user
      space accesses, but it does already clean up some code:
      
       - the inlined trivial cases of copy_in_user() now do stac/clac just
         once over the accesses: they used to do one pair around the user
         space read, and another pair around the write-back.
      
       - the {get,put}_user_ex() macros that are used with the catch/try
         handling don't do any stac/clac at all, because that happens in the
         try/catch surrounding them.
      
      Other than those two cleanups that happened naturally from the
      re-organization, this should not make any difference. Yet.
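      
      The higher-level wrappers, sketched (names as used in the x86 tree
      of that period):
      
      	static __always_inline void __uaccess_begin(void)
      	{
      		stac();		/* open the SMAP window for user accesses */
      	}
      
      	static __always_inline void __uaccess_end(void)
      	{
      		clac();		/* close it again */
      	}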
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  25. 23 Nov 2015, 1 commit
  26. 07 Oct 2015, 2 commits
  27. 19 May 2015, 1 commit
  28. 13 Jan 2015, 1 commit
  29. 28 Dec 2013, 2 commits
  30. 17 Dec 2013, 1 commit
  31. 26 Oct 2013, 1 commit