1. 08 October 2016, 1 commit
    • x86/pkeys: Make protection keys an "eager" feature · d4b05923
      Committed by Dave Hansen
      Our XSAVE features are divided into two categories: those that
      generate FPU exceptions, and those that do not.  MPX and pkeys do
      not generate FPU exceptions and thus can not be used lazily.  We
      disable them when lazy mode is forced on.
      
      We have a pair of masks to collect these two sets of features, but
      XFEATURE_MASK_PKRU was added to the wrong mask: XFEATURE_MASK_LAZY.
      Fix it by moving the feature to XFEATURE_MASK_EAGER.
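      A simplified sketch of the intended split (mask contents abridged;
      not the verbatim diff):
      
        /* Features which tolerate lazy state saving (abridged) */
        #define XFEATURE_MASK_LAZY   (XFEATURE_MASK_FP  | \
                                      XFEATURE_MASK_SSE | \
                                      XFEATURE_MASK_YMM)
        
        /*
         * Features which must be handled eagerly because they never
         * raise FPU exceptions: MPX bounds state and, with this fix,
         * the protection keys register state (PKRU).
         */
        #define XFEATURE_MASK_EAGER  (XFEATURE_MASK_BNDREGS | \
                                      XFEATURE_MASK_BNDCSR  | \
                                      XFEATURE_MASK_PKRU)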
      
      Note: this only causes problems if you boot with lazy FPU mode
      (eagerfpu=off), which is *not* the default.  It also only affects
      hardware which is not currently publicly available.  It looks like
      lazy mode is going away, but we still need this patch applied
      to any kernel that has protection keys and lazy mode, which is 4.6
      through 4.8 at this point, and 4.9 if the lazy removal isn't sent
      to Linus for 4.9.
      
      Fixes: c8df4009 ("x86/fpu, x86/mm/pkeys: Add PKRU xsave fields and data structures")
      Signed-off-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20161007162342.28A49813@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  2. 06 October 2016, 1 commit
    • x86/unwind: Fix oprofile module link error · cfee9edd
      Committed by Josh Poimboeuf
      When compiling on x86 with CONFIG_OPROFILE=m and CONFIG_FRAME_POINTER=n,
      the oprofile module fails to link:
      
        ERROR: "ftrace_graph_ret_addr" [arch/x86/oprofile/oprofile.ko] undefined!
      
      The problem was introduced when oprofile was converted to use the new
      x86 unwinder.  When frame pointers are disabled, the "guess" unwinder's
      unwind_get_return_address() is an inline function which calls
      ftrace_graph_ret_addr(), which is not exported.
      
      Fix it by converting the "guess" version of unwind_get_return_address()
      to an exported out-of-line function, just like its frame pointer
      counterpart.
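      A sketch of the resulting exported, out-of-line helper (simplified;
      the real function may differ in detail):
      
        /* arch/x86/kernel/unwind_guess.c (sketch) */
        unsigned long unwind_get_return_address(struct unwind_state *state)
        {
                if (unwind_done(state))
                        return 0;
        
                /* ftrace_graph_ret_addr() is now referenced only from core
                 * kernel code, so modules no longer need the unexported symbol. */
                return ftrace_graph_ret_addr(state->task, &state->graph_idx,
                                             *state->sp, state->sp);
        }
        EXPORT_SYMBOL_GPL(unwind_get_return_address);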
      Reported-by: Karl Beldan <karl.beldan@gmail.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: ec2ad9cc ("oprofile/x86: Convert x86_backtrace() to use the new unwinder")
      Link: http://lkml.kernel.org/r/be08d589f6474df78364e081c42777e382af9352.1475731632.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 30 September 2016, 4 commits
  4. 24 September 2016, 1 commit
  5. 22 September 2016, 5 commits
  6. 21 September 2016, 1 commit
  7. 20 September 2016, 6 commits
    • x86/dumpstack: Remove dump_trace() and related callbacks · c8fe4609
      Committed by Josh Poimboeuf
      All previous users of dump_trace() have been converted to use the new
      unwind interfaces, so we can remove it and the related
      print_context_stack() and print_context_stack_bp() callback functions.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5b97da3572b40b5a4d8e185cf2429308d0987a13.1474045023.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/dumpstack: Convert show_trace_log_lvl() to use the new unwinder · e18bcccd
      Committed by Josh Poimboeuf
      Convert show_trace_log_lvl() to use the new unwinder.  dump_trace() has
      been deprecated.
      
      show_trace_log_lvl() is special compared to other users of the unwinder.
      It's the only place where both reliable *and* unreliable addresses are
      needed.  With frame pointers enabled, most callers of the unwinder don't
      want to know about unreliable addresses.  But in this case, when we're
      dumping the stack to the console because something presumably went
      wrong, the unreliable addresses are useful:
      
      - They show stale data on the stack which can provide useful clues.
      
      - If something goes wrong with the unwinder, or if frame pointers are
        corrupt or missing, all the stack addresses still get shown.
      
      So in order to show all addresses on the stack, and at the same time
      figure out which addresses are reliable, we have to do the scanning and
      the unwinding in parallel.
      
      The scanning is done with the help of get_stack_info() to traverse the
      stacks.  The unwinding is done separately by the new unwinder.
      
      In theory we could simplify show_trace_log_lvl() by instead pushing some
      of this logic into the unwind code.  But then we would need some kind of
      "fake" frame logic in the unwinder which would add a lot of complexity
      and wouldn't be worth it in order to support only one user.
      
      Another benefit of this approach is that once we have a DWARF unwinder,
      we should be able to just plug it in with minimal impact to this code.
      
      Another change here is that callers of show_trace_log_lvl() don't need
      to provide the 'bp' argument.  The unwinder already finds the relevant
      frame pointer by unwinding until it reaches the first frame after the
      provided stack pointer.
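      A heavily simplified sketch of the scan-plus-unwind loop described
      above (helper names as used in this series; details differ from the
      real function):
      
        static void show_trace_sketch(struct task_struct *task, struct pt_regs *regs,
                                      unsigned long *sp, const char *log_lvl)
        {
                struct unwind_state state;
                struct stack_info stack_info = {};
                unsigned long visit_mask = 0;
                unsigned long *stack;
        
                unwind_start(&state, task, regs, sp);
        
                /* Outer loop: walk each stack (task, IRQ, exception, ...) */
                for (stack = sp; stack; stack = stack_info.next_sp) {
                        if (get_stack_info(stack, task, &stack_info, &visit_mask))
                                break;
        
                        /* Inner loop: scan every word on this stack */
                        for (; stack < (unsigned long *)stack_info.end; stack++) {
                                unsigned long addr = *stack;
        
                                if (!__kernel_text_address(addr))
                                        continue;
        
                                /* Reliable only when the unwinder agrees this
                                 * word is the next return address. */
                                if (addr == unwind_get_return_address(&state)) {
                                        printk("%s %pB\n", log_lvl, (void *)addr);
                                        unwind_next_frame(&state);
                                } else {
                                        printk("%s ? %pB\n", log_lvl, (void *)addr);
                                }
                        }
                }
        }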
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/703b5998604c712a1f801874b43f35d6dac52ede.1474045023.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/unwind: Add new unwind interface and implementations · 7c7900f8
      Committed by Josh Poimboeuf
      The x86 stack dump code is a bit of a mess.  dump_trace() uses
      callbacks, and each user of it seems to have slightly different
      requirements, so there are several slightly different callbacks floating
      around.
      
      Also there are some upcoming features which will need more changes to
      the stack dump code, including the printing of stack pt_regs, reliable
      stack detection for live patching, and a DWARF unwinder.  Each of those
      features would at least need more callbacks and/or callback interfaces,
      resulting in a much bigger mess than what we have today.
      
      Before doing all that, we should try to clean things up and replace
      dump_trace() with something cleaner and more flexible.
      
      The new unwinder is a simple state machine which was heavily inspired by
      a suggestion from Andy Lutomirski:
      
        https://lkml.kernel.org/r/CALCETrUbNTqaM2LRyXGRx=kVLRPeY5A3Pc6k4TtQxF320rUT=w@mail.gmail.com
      
      It's also similar to the libunwind API:
      
        http://www.nongnu.org/libunwind/man/libunwind(3).html
      
      Some of its advantages:
      
      - Simplicity: no more callback sprawl and less code duplication.
      
      - Flexibility: it allows the caller to stop and inspect the stack state
        at each step in the unwinding process.
      
      - Modularity: the unwinder code, console stack dump code, and stack
        metadata analysis code are all better separated so that changing one
        of them shouldn't have much of an impact on any of the others.
      
      Two implementations are added which conform to the new unwind interface:
      
      - The frame pointer unwinder which is used for CONFIG_FRAME_POINTER=y.
      
      - The "guess" unwinder which is used for CONFIG_FRAME_POINTER=n.  This
        isn't an "unwinder" per se.  All it does is scan the stack for kernel
        text addresses.  But with no frame pointers, guesses are better than
        nothing in most cases.
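      A sketch of how a caller drives the new state-machine interface,
      given a task, optional pt_regs and a starting stack pointer (exact
      signatures may vary):
      
        struct unwind_state state;
        unsigned long addr;
        
        for (unwind_start(&state, task, regs, first_frame);
             !unwind_done(&state);
             unwind_next_frame(&state)) {
                /* Inspect the state at each step, e.g. print the return address. */
                addr = unwind_get_return_address(&state);
                if (!addr)
                        break;
                printk("  %pB\n", (void *)addr);
        }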
      Suggested-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nilay Vaish <nilayvaish@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/6dc2f909c47533d213d0505f0a113e64585bec82.1474045023.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/rwsem, x86: Drop a bogus cc clobber · c907420f
      Committed by Jan Beulich
      With the addition of uses of GCC's condition code outputs in commit:
      
        35ccfb71 ("x86, asm: Use CC_SET()/CC_OUT() in <asm/rwsem.h>")
      
      ... there's now an overlap of outputs and clobbers in __down_write_trylock().
      
      Such overlaps generally get tagged with an error (occasionally
      even with an ICE). I can't really tell why plain GCC 6.2 doesn't detect
      this (judging by the code, it is meant to), while the slightly modified
      one I use does. Since condition code clobbers are never necessary on x86
      (other than perhaps for documentation purposes, which doesn't really
      get done consistently), remove it altogether rather than inventing
      something like CC_CLOBBER (to accompany CC_SET/CC_OUT).
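      An illustrative example of the pattern in question (not the rwsem
      code itself): the flag is expressed as an output via the kernel's
      CC_SET()/CC_OUT() helpers, so no "cc" clobber is listed.
      
        static inline bool dec_and_test_sketch(int *v)
        {
                bool zero;
        
                asm volatile("decl %0"
                             CC_SET(z)
                             : "+m" (*v), CC_OUT(z) (zero)
                             : : "memory");        /* note: no "cc" clobber */
                return zero;
        }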
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/57E003CC0200007800110102@prv-mh.provo.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/apic: Get rid of apic_version[] array · cff9ab2b
      Committed by Denys Vlasenko
      The array has a size of MAX_LOCAL_APIC, which can be as large as 32k, so it
      can consume up to 128k.
      
      The array has been there forever and was never used for anything useful
      other than a version mismatch check which was introduced in 2009.
      
      There is no reason to store the version in an array. The kernel is not
      prepared to handle different APIC versions anyway, so the real important
      part is to detect a version mismatch and warn about it, which can be done
      with a single variable as well.
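      A sketch of the single-variable scheme described above (illustrative
      names, not the exact patch):
      
        static u8 boot_cpu_apic_version;        /* APIC version of the boot CPU */
        
        static void validate_apic_version_sketch(int cpu, u8 version)
        {
                if (!boot_cpu_apic_version)
                        boot_cpu_apic_version = version;
                else if (boot_cpu_apic_version != version)
                        pr_warn("APIC version mismatch: CPU%d has %x, boot CPU has %x\n",
                                cpu, version, boot_cpu_apic_version);
        }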
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      CC: Andy Lutomirski <luto@amacapital.net>
      CC: Borislav Petkov <bp@alien8.de>
      CC: Brian Gerst <brgerst@gmail.com>
      CC: Mike Travis <travis@sgi.com>
      Link: http://lkml.kernel.org/r/20160913181232.30815-1-dvlasenk@redhat.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/apic: Order irq_enter/exit() calls correctly vs. ack_APIC_irq() · b0f48706
      Committed by Wanpeng Li
      ===============================
      [ INFO: suspicious RCU usage. ]
      4.8.0-rc6+ #5 Not tainted
      -------------------------------
      ./arch/x86/include/asm/msr-trace.h:47 suspicious rcu_dereference_check() usage!
      
      other info that might help us debug this:
      
      RCU used illegally from idle CPU!
      rcu_scheduler_active = 1, debug_locks = 0
      RCU used illegally from extended quiescent state!
      no locks held by swapper/2/0.
      
      stack backtrace:
      CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.8.0-rc6+ #5
      Hardware name: Dell Inc. OptiPlex 7020/0F5C5X, BIOS A03 01/08/2015
       0000000000000000 ffff8d1bd6003f10 ffffffff94446949 ffff8d1bd4a68000
       0000000000000001 ffff8d1bd6003f40 ffffffff940e9247 ffff8d1bbdfcf3d0
       000000000000080b 0000000000000000 0000000000000000 ffff8d1bd6003f70
      Call Trace:
       <IRQ>  [<ffffffff94446949>] dump_stack+0x99/0xd0
       [<ffffffff940e9247>] lockdep_rcu_suspicious+0xe7/0x120
       [<ffffffff9448e0d5>] do_trace_write_msr+0x135/0x140
       [<ffffffff9406e750>] native_write_msr+0x20/0x30
       [<ffffffff9406503d>] native_apic_msr_eoi_write+0x1d/0x30
       [<ffffffff9405b17e>] smp_trace_call_function_interrupt+0x1e/0x270
       [<ffffffff948cb1d6>] trace_call_function_interrupt+0x96/0xa0
       <EOI>  [<ffffffff947200f4>] ? cpuidle_enter_state+0xe4/0x360
       [<ffffffff947200df>] ? cpuidle_enter_state+0xcf/0x360
       [<ffffffff947203a7>] cpuidle_enter+0x17/0x20
       [<ffffffff940df008>] cpu_startup_entry+0x338/0x4d0
       [<ffffffff9405bfc4>] start_secondary+0x154/0x180
      
      This can be readily reproduced by running the ftrace test case from kselftest.
      
      Move the irq_enter() call before ack_APIC_irq(), because irq_enter() tells
      the RCU subsystem to end the extended quiescent state, so that the
      following trace call in ack_APIC_irq() works correctly. The same applies to
      exiting_ack_irq() which calls ack_APIC_irq() after irq_exit().
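      A sketch of the required ordering (simplified from the asm/apic.h
      helpers):
      
        static inline void entering_ack_irq_sketch(void)
        {
                irq_enter();            /* leave the RCU extended quiescent state first */
                ack_APIC_irq();         /* ...so the MSR trace event here is legal */
        }
        
        static inline void exiting_ack_irq_sketch(void)
        {
                ack_APIC_irq();         /* trace-capable work while RCU is still watching */
                irq_exit();
        }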
      
      [ tglx: Massaged changelog ]
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1474198491-3738-1-git-send-email-wanpeng.li@hotmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  8. 16 September 2016, 2 commits
    • x86/dumpstack: Remove NULL task pointer convention · 81539169
      Committed by Josh Poimboeuf
      show_stack_log_lvl() and friends allow a NULL pointer for the
      task_struct to indicate the current task.  This creates confusion and
      can cause sneaky bugs.
      
      Instead require the caller to pass 'current' directly.
      
      This only changes the internal workings of the dumpstack code.  The
      dump_trace() and show_stack() interfaces still allow a NULL task
      pointer.  Those interfaces should probably be fixed as well.
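      For example (illustrative call; the argument list is abridged):
      
        /* Before: NULL implicitly meant "the current task". */
        show_stack_log_lvl(NULL, regs, sp, log_lvl);
        
        /* After: the caller must name the task explicitly. */
        show_stack_log_lvl(current, regs, sp, log_lvl);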
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • fix minor infoleak in get_user_ex() · 1c109fab
      Committed by Al Viro
      get_user_ex(x, ptr) should zero x on failure.  It's not much of a leak
      (at most we are leaking an uninitialized 64-bit value off the kernel stack,
      and in a fairly constrained situation, at that), but the fix is trivial,
      so...
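      The rule being enforced, as a plain-C sketch (read_user_sketch() is a
      hypothetical stand-in for the faulting access, not a kernel API):
      
        #define get_user_ex_sketch(x, ptr)      do {            \
                if (read_user_sketch(&(x), (ptr)))              \
                        (x) = 0;        /* zero on failure */   \
        } while (0)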
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      [ This sat in different branch from the uaccess fixes since mid-August ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 15 September 2016, 6 commits
  10. 13 September 2016, 2 commits
  11. 09 September 2016, 3 commits
    • x86/efi: Allow invocation of arbitrary boot services · 0a637ee6
      Committed by Lukas Wunner
      We currently allow invocation of 8 boot services with efi_call_early().
      Not included are LocateHandleBuffer and LocateProtocol in particular.
      For graphics output or to retrieve PCI ROMs and Apple device properties,
      we're thus forced to use the LocateHandle + AllocatePool + LocateHandle
      combo, which is cumbersome and needs more code.
      
      The ARM folks allow invocation of the full set of boot services but are
      restricted to our 8 boot services in functions shared across arches.
      
      Thus, rather than adding just LocateHandleBuffer and LocateProtocol to
      struct efi_config, let's rework efi_call_early() to allow invocation of
      arbitrary boot services by selecting the 64 bit vs 32 bit code path in
      the macro itself.
      
      When compiling for 32 bit or for 64 bit without mixed mode, the unused
      code path is optimized away and the binary code is the same as before.
      But on 64 bit with mixed mode enabled, this commit adds one compare
      instruction to each invocation of a boot service and, depending on the
      code path selected, two jump instructions. (Most of the time gcc
      arranges the jumps in the 32 bit code path.) The result is a minuscule
      performance penalty and the binary code becomes slightly larger and more
      difficult to read when disassembled. This isn't a hot path, so these
      drawbacks are arguably outweighed by the attainable simplification of
      the C code. We have some overhead anyway for thunking or conversion
      between calling conventions.
      
      The 8 boot services can consequently be removed from struct efi_config.
      
      No functional change intended (for now).
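      A structural sketch of the reworked macro (the type and member names
      here are illustrative, not the exact kernel definitions):
      
        #define efi_call_early_sketch(f, ...)                                   \
                __efi_early()->call(efi_is_64bit()                              \
                        ? ((boot_services_64_sketch *)(unsigned long)           \
                                        __efi_early()->boot_services)->f        \
                        : ((boot_services_32_sketch *)(unsigned long)           \
                                        __efi_early()->boot_services)->f,       \
                        __VA_ARGS__)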
      
      Example -- invocation of free_pool before (64 bit code path):
      0x2d4      movq  %ds:efi_early, %rdx          ; efi_early
      0x2db      movq  %ss:arg_0-0x20(%rsp), %rsi
      0x2e0      xorl  %eax, %eax
      0x2e2      movq  %ds:0x28(%rdx), %rdi         ; efi_early->free_pool
      0x2e6      callq *%ds:0x58(%rdx)              ; efi_early->call()
      
      Example -- invocation of free_pool after (64 / 32 bit mixed code path):
      0x0dc      movq  %ds:efi_early, %rax          ; efi_early
      0x0e3      cmpb  $0, %ds:0x28(%rax)           ; !efi_early->is64 ?
      0x0e7      movq  %ds:0x20(%rax), %rdx         ; efi_early->call()
      0x0eb      movq  %ds:0x10(%rax), %rax         ; efi_early->boot_services
      0x0ef      je    $0x150
      0x0f1      movq  %ds:0x48(%rax), %rdi         ; free_pool (64 bit)
      0x0f5      xorl  %eax, %eax
      0x0f7      callq *%rdx
      ...
      0x150      movl  %ds:0x30(%rax), %edi         ; free_pool (32 bit)
      0x153      jmp   $0x0f5
      
      Size of eboot.o text section:
      CONFIG_X86_32:                         6464 before, 6318 after
      CONFIG_X86_64 && !CONFIG_EFI_MIXED:    7670 before, 7573 after
      CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    7670 before, 8319 after
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
    • x86/efi: Optimize away setup_gop32/64 if unused · 27571616
      Committed by Lukas Wunner
      Commit 2c23b73c ("x86/efi: Prepare GOP handling code for reuse
      as generic code") introduced an efi_is_64bit() macro to x86 which
      previously only existed for arm arches.  The macro is used to
      choose between the 64 bit or 32 bit code path in gop.c at runtime.
      
      However the code path that's going to be taken is known at compile
      time when compiling for x86_32 or for x86_64 with mixed mode disabled.
      Amend the macro to eliminate the unused code path in those cases.
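      A sketch of the amended helper (simplified; the real definition may
      differ):
      
        static inline bool efi_is_64bit_sketch(void)
        {
                if (!IS_ENABLED(CONFIG_X86_64))
                        return false;           /* 32-bit build: constant false */
        
                if (!IS_ENABLED(CONFIG_EFI_MIXED))
                        return true;            /* pure 64-bit build: constant true */
        
                return __efi_early()->is64;     /* mixed mode: decided at runtime */
        }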
      
      Size of gop.o text section:
      CONFIG_X86_32:                         1758 before, 1299 after
      CONFIG_X86_64 && !CONFIG_EFI_MIXED:    2201 before, 1406 after
      CONFIG_X86_64 &&  CONFIG_EFI_MIXED:    2201 before and after
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
    • efi: Refactor efi_memmap_init_early() into arch-neutral code · 9479c7ce
      Committed by Matt Fleming
      Every EFI architecture apart from ia64 needs to set up the EFI memory
      map at efi.memmap, and the code for doing that is essentially the same
      across all implementations. Therefore, it makes sense to factor this
      out into the common code under drivers/firmware/efi/.
      
      The only slight variation is the data structure out of which we pull
      the initial memory map information, such as physical address, memory
      descriptor size and version, etc. We can address this by passing a
      generic data structure (struct efi_memory_map_data) as the argument to
      efi_memmap_init_early() which contains the minimum info required for
      initialising the memory map.
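      A sketch of the generic descriptor and entry point (field set as
      described above; the real declarations may differ slightly):
      
        struct efi_memory_map_data {
                phys_addr_t     phys_map;       /* physical address of the EFI memmap */
                unsigned long   size;           /* total size of the map in bytes */
                unsigned long   desc_size;      /* size of one memory descriptor */
                unsigned long   desc_version;   /* memory descriptor format version */
        };
        
        int efi_memmap_init_early(struct efi_memory_map_data *data);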
      
      In the process, this patch also fixes a few undesirable implementation
      differences:
      
       - ARM and arm64 were failing to clear the EFI_MEMMAP bit when
         unmapping the early EFI memory map. EFI_MEMMAP indicates whether
         the EFI memory map is mapped (not the regions contained within) and
         can be traversed.  It's more correct to set the bit as soon as we
         memremap() the passed in EFI memmap.
      
       - Rename efi_unmap_memmap() to efi_memmap_unmap() to adhere to the
         regular naming scheme.
      
      This patch also uses a read-write mapping for the memory map instead
      of the read-only mapping currently used on ARM and arm64. x86 needs
      the ability to update the memory map in-place when assigning virtual
      addresses to regions (efi_map_region()) and tagging regions when
      reserving boot services (efi_reserve_boot_services()).
      
      There's no way for the generic fake_mem code to know which mapping to
      use without introducing some arch-specific constant/hook, so just use
      read-write since read-only is of dubious value for the EFI memory map.
      
      Tested-by: Dave Young <dyoung@redhat.com> [kexec/kdump]
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [arm]
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
  12. 08 September 2016, 4 commits
  13. 07 September 2016, 1 commit
    • x86/uaccess: force copy_*_user() to be inlined · e6971009
      Committed by Kees Cook
      As already done with __copy_*_user(), mark copy_*_user() as __always_inline.
      Without this, the checks for things like __builtin_constant_p() won't work
      consistently in either hardened usercopy or the recent adjustments for
      detecting usercopy overflows at compile time.
      
      The change in kernel text size is detectable, but very small:
      
       text      data     bss     dec      hex     filename
      12118735  5768608 14229504 32116847 1ea106f vmlinux.before
      12120207  5768608 14229504 32118319 1ea162f vmlinux.after
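      A simplified sketch of the change (the real x86 implementation
      contains additional checks):
      
        static __always_inline unsigned long
        copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
        {
                might_fault();
                /* Forced inlining keeps __builtin_constant_p(n) meaningful at the
                 * call site, so the compile-time usercopy overflow checks fire. */
                if (likely(access_ok(VERIFY_WRITE, to, n)))
                        n = _copy_to_user(to, from, n);
                return n;
        }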
      Signed-off-by: Kees Cook <keescook@chromium.org>
  14. 05 September 2016, 3 commits