1. 02 June 2016 (6 commits)
    • KVM: x86: fix OOPS after invalid KVM_SET_DEBUGREGS · d14bdb55
      Paolo Bonzini authored
      MOV to DR6 or DR7 causes a #GP if an attempt is made to write a 1 to
      any of bits 63:32.  However, this is not detected at KVM_SET_DEBUGREGS
      time, and the next KVM_RUN oopses:
      
         general protection fault: 0000 [#1] SMP
         CPU: 2 PID: 14987 Comm: a.out Not tainted 4.4.9-300.fc23.x86_64 #1
         Hardware name: LENOVO 2325F51/2325F51, BIOS G2ET32WW (1.12 ) 05/30/2012
         [...]
         Call Trace:
          [<ffffffffa072c93d>] kvm_arch_vcpu_ioctl_run+0x141d/0x14e0 [kvm]
          [<ffffffffa071405d>] kvm_vcpu_ioctl+0x33d/0x620 [kvm]
          [<ffffffff81241648>] do_vfs_ioctl+0x298/0x480
          [<ffffffff812418a9>] SyS_ioctl+0x79/0x90
          [<ffffffff817a0f2e>] entry_SYSCALL_64_fastpath+0x12/0x71
         Code: 55 83 ff 07 48 89 e5 77 27 89 ff ff 24 fd 90 87 80 81 0f 23 fe 5d c3 0f 23 c6 5d c3 0f 23 ce 5d c3 0f 23 d6 5d c3 0f 23 de 5d c3 <0f> 23 f6 5d c3 0f 0b 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00
         RIP  [<ffffffff810639eb>] native_set_debugreg+0x2b/0x40
          RSP <ffff88005836bd50>
      
      Testcase (beautified/reduced from syzkaller output):
      
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <string.h>
          #include <stdint.h>
          #include <linux/kvm.h>
          #include <fcntl.h>
          #include <sys/ioctl.h>
      
          long r[8];
      
          int main()
          {
              struct kvm_debugregs dr = { 0 };
      
              r[2] = open("/dev/kvm", O_RDONLY);
              r[3] = ioctl(r[2], KVM_CREATE_VM, 0);
              r[4] = ioctl(r[3], KVM_CREATE_VCPU, 7);
      
              memcpy(&dr,
                     "\x5d\x6a\x6b\xe8\x57\x3b\x4b\x7e\xcf\x0d\xa1\x72"
                     "\xa3\x4a\x29\x0c\xfc\x6d\x44\x00\xa7\x52\xc7\xd8"
                     "\x00\xdb\x89\x9d\x78\xb5\x54\x6b\x6b\x13\x1c\xe9"
                     "\x5e\xd3\x0e\x40\x6f\xb4\x66\xf7\x5b\xe3\x36\xcb",
                     48);
              r[7] = ioctl(r[4], KVM_SET_DEBUGREGS, &dr);
              r[6] = ioctl(r[4], KVM_RUN, 0);
          }
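      For reference, a minimal sketch of the kind of validation the fix adds at
      KVM_SET_DEBUGREGS time (illustrative only, not necessarily the literal
      patch; the helper name is made up):

          #include <errno.h>
          #include <linux/kvm.h>

          /* DR6 and DR7 must not have any of bits 63:32 set, otherwise the
           * later MOV to the debug register would #GP inside KVM_RUN */
          static int debugregs_are_valid(const struct kvm_debugregs *dr)
          {
              if ((dr->dr6 >> 32) || (dr->dr7 >> 32))
                  return -EINVAL;
              return 0;
          }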
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      d14bdb55
    • KVM: fail KVM_SET_VCPU_EVENTS with invalid exception number · 78e546c8
      Paolo Bonzini authored
      This cannot be returned by KVM_GET_VCPU_EVENTS, so it is okay to return
      EINVAL.  It causes a WARN from exception_type:
      
          WARNING: CPU: 3 PID: 16732 at arch/x86/kvm/x86.c:345 exception_type+0x49/0x50 [kvm]()
          CPU: 3 PID: 16732 Comm: a.out Tainted: G        W       4.4.6-300.fc23.x86_64 #1
          Hardware name: LENOVO 2325F51/2325F51, BIOS G2ET32WW (1.12 ) 05/30/2012
           0000000000000286 000000006308a48b ffff8800bec7fcf8 ffffffff813b542e
           0000000000000000 ffffffffa0966496 ffff8800bec7fd30 ffffffff810a40f2
           ffff8800552a8000 0000000000000000 00000000002c267c 0000000000000001
          Call Trace:
           [<ffffffff813b542e>] dump_stack+0x63/0x85
           [<ffffffff810a40f2>] warn_slowpath_common+0x82/0xc0
           [<ffffffff810a423a>] warn_slowpath_null+0x1a/0x20
           [<ffffffffa0924809>] exception_type+0x49/0x50 [kvm]
           [<ffffffffa0934622>] kvm_arch_vcpu_ioctl_run+0x10a2/0x14e0 [kvm]
           [<ffffffffa091c04d>] kvm_vcpu_ioctl+0x33d/0x620 [kvm]
           [<ffffffff81241248>] do_vfs_ioctl+0x298/0x480
           [<ffffffff812414a9>] SyS_ioctl+0x79/0x90
           [<ffffffff817a04ee>] entry_SYSCALL_64_fastpath+0x12/0x71
          ---[ end trace b1a0391266848f50 ]---
      
      Testcase (beautified/reduced from syzkaller output):
      
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <string.h>
          #include <stdint.h>
          #include <fcntl.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>
      
          long r[31];
      
          int main()
          {
              memset(r, -1, sizeof(r));
              r[2] = open("/dev/kvm", O_RDONLY);
              r[3] = ioctl(r[2], KVM_CREATE_VM, 0);
              r[7] = ioctl(r[3], KVM_CREATE_VCPU, 0);
      
              struct kvm_vcpu_events ve = {
                      .exception.injected = 1,
                      .exception.nr = 0xd4
              };
              r[27] = ioctl(r[7], KVM_SET_VCPU_EVENTS, &ve);
              r[30] = ioctl(r[7], KVM_RUN, 0);
              return 0;
          }
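      For reference, a sketch of the check the fix adds (illustrative; the
      helper name is made up): an injected exception number must be a valid x86
      vector that KVM_GET_VCPU_EVENTS could ever report.

          #include <errno.h>
          #include <linux/kvm.h>

          static int vcpu_events_are_valid(const struct kvm_vcpu_events *ev)
          {
              /* vectors above 31 (such as 0xd4 in the testcase) are rejected */
              if (ev->exception.injected && ev->exception.nr > 31)
                  return -EINVAL;
              return 0;
          }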
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      78e546c8
    • KVM: x86: avoid vmalloc(0) in the KVM_SET_CPUID · 83676e92
      Paolo Bonzini authored
      This causes an ugly dmesg splat.  Beautified syzkaller testcase:
      
          #include <unistd.h>
          #include <sys/syscall.h>
          #include <sys/ioctl.h>
          #include <fcntl.h>
          #include <linux/kvm.h>
      
          long r[8];
      
          int main()
          {
              struct kvm_cpuid2 c = { 0 };
              r[2] = open("/dev/kvm", O_RDWR);
              r[3] = ioctl(r[2], KVM_CREATE_VM, 0);
              r[4] = ioctl(r[3], KVM_CREATE_VCPU, 0x8);
              r[7] = ioctl(r[4], KVM_SET_CPUID, &c);
              return 0;
          }
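      The general fix pattern, sketched with a made-up helper (user-space
      malloc stands in for the kernel allocator): never request a zero-sized
      allocation when nent is 0.

          #include <errno.h>
          #include <stdlib.h>

          static int alloc_cpuid_entries(void **entries, size_t entry_size,
                                         unsigned int nent)
          {
              *entries = NULL;
              if (!nent)
                  return 0;                       /* nothing to allocate */
              *entries = malloc(entry_size * nent);
              return *entries ? 0 : -ENOMEM;
          }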
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      83676e92
    • kvm: x86: avoid warning on repeated KVM_SET_TSS_ADDR · b21629da
      Paolo Bonzini authored
      Found by syzkaller:
      
          WARNING: CPU: 3 PID: 15175 at arch/x86/kvm/x86.c:7705 __x86_set_memory_region+0x1dc/0x1f0 [kvm]()
          CPU: 3 PID: 15175 Comm: a.out Tainted: G        W       4.4.6-300.fc23.x86_64 #1
          Hardware name: LENOVO 2325F51/2325F51, BIOS G2ET32WW (1.12 ) 05/30/2012
           0000000000000286 00000000950899a7 ffff88011ab3fbf0 ffffffff813b542e
           0000000000000000 ffffffffa0966496 ffff88011ab3fc28 ffffffff810a40f2
           00000000000001fd 0000000000003000 ffff88014fc50000 0000000000000000
          Call Trace:
           [<ffffffff813b542e>] dump_stack+0x63/0x85
           [<ffffffff810a40f2>] warn_slowpath_common+0x82/0xc0
           [<ffffffff810a423a>] warn_slowpath_null+0x1a/0x20
           [<ffffffffa09251cc>] __x86_set_memory_region+0x1dc/0x1f0 [kvm]
           [<ffffffffa092521b>] x86_set_memory_region+0x3b/0x60 [kvm]
           [<ffffffffa09bb61c>] vmx_set_tss_addr+0x3c/0x150 [kvm_intel]
           [<ffffffffa092f4d4>] kvm_arch_vm_ioctl+0x654/0xbc0 [kvm]
           [<ffffffffa091d31a>] kvm_vm_ioctl+0x9a/0x6f0 [kvm]
           [<ffffffff81241248>] do_vfs_ioctl+0x298/0x480
           [<ffffffff812414a9>] SyS_ioctl+0x79/0x90
           [<ffffffff817a04ee>] entry_SYSCALL_64_fastpath+0x12/0x71
      
      Testcase:
      
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <fcntl.h>
          #include <string.h>
          #include <linux/kvm.h>
      
          long r[8];
      
          int main()
          {
              memset(r, -1, sizeof(r));
              r[2] = open("/dev/kvm", O_RDONLY|O_TRUNC);
              r[3] = ioctl(r[2], KVM_CREATE_VM, 0x0ul);
              r[5] = ioctl(r[3], KVM_SET_TSS_ADDR, 0x20000000ul);
              r[7] = ioctl(r[3], KVM_SET_TSS_ADDR, 0x20000000ul);
              return 0;
          }
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      b21629da
    • KVM: Handle MSR_IA32_PERF_CTL · 0c2df2a1
      Dmitry Bilunov authored
      Intel CPUs having Turbo Boost feature implement an MSR to provide a
      control interface via rdmsr/wrmsr instructions. One could detect the
      presence of this feature by issuing one of these instructions and
      handling the #GP exception which is generated in case the referenced MSR
      is not implemented by the CPU.
      
      KVM's vCPU model behaves exactly like a real CPU in this case, injecting
      a fault when MSR_IA32_PERF_CTL is accessed (KVM does not implement this
      MSR).  However, some operating systems use this register during an early
      boot stage in which their kernel is not capable of handling #GP
      correctly, causing a #DF (double fault) and finally a triple fault,
      effectively resetting the vCPU.
      
      This patch implements a dummy handler for MSR_IA32_PERF_CTL to avoid the
      crashes.
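      A hedged sketch of what such a dummy handler amounts to (illustrative,
      not the literal KVM patch; the function name is made up):

          #include <stdbool.h>
          #include <stdint.h>

          #define MSR_IA32_PERF_CTL 0x00000199    /* architectural MSR number */

          /* report the MSR as present and reading as zero instead of letting
           * the caller inject #GP; writes would be ignored the same way */
          static bool emulate_rdmsr(uint32_t index, uint64_t *value)
          {
              switch (index) {
              case MSR_IA32_PERF_CTL:
                  *value = 0;
                  return true;      /* handled: no #GP injected */
              default:
                  return false;     /* unhandled: caller injects #GP */
              }
          }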
      Signed-off-by: Dmitry Bilunov <kmeaw@yandex-team.ru>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      0c2df2a1
    • KVM: x86: avoid write-tearing of TDP · b19ee2ff
      Nadav Amit authored
      In theory, nothing prevents the compiler from write-tearing PTEs, i.e.
      splitting PTE writes. These partially modified PTEs can be fetched by
      other cores and cause mayhem. I have not really encountered such a case
      in real life, but it does seem possible.
      
      For example, the compiler may try to do something creative for
      kvm_set_pte_rmapp() and perform multiple writes to the PTE.
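      A sketch of the idea (illustrative; in the kernel this is what a
      WRITE_ONCE()-style store provides): force a single, untearable 64-bit
      store.

          #include <stdint.h>

          typedef uint64_t tdp_pte_t;    /* stand-in for a TDP PTE */

          static inline void set_pte_untorn(volatile tdp_pte_t *ptep,
                                            tdp_pte_t val)
          {
              /* the volatile access keeps the compiler from splitting or
               * re-issuing the store, so other cores never see a half-written
               * PTE */
              *ptep = val;
          }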
      Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      b19ee2ff
  2. 28 May 2016 (2 commits)
    • mm: remove more IS_ERR_VALUE abuses · 5d22fc25
      Linus Torvalds authored
      The do_brk() and vm_brk() return value was "unsigned long" and returned
      the starting address on success, and an error value on failure.  The
      reasons are entirely historical, and go back to it basically behaving
      like the mmap() interface does.
      
      However, nobody actually wanted that interface, and it causes totally
      pointless IS_ERR_VALUE() confusion.
      
      What every single caller actually wants is just the simpler integer
      return of zero for success and negative error number on failure.
      
      So just convert to that much clearer and more common calling convention,
      and get rid of all the IS_ERR_VALUE() uses wrt vm_brk().
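      A sketch of the convention change for a hypothetical in-kernel caller
      (function names made up; vm_brk()'s prototype differs before and after,
      which is the point):

          #include <linux/err.h>
          #include <linux/mm.h>

          /* before: the return value is an address or a disguised error code */
          static int map_brk_old(unsigned long start, unsigned long len)
          {
              unsigned long addr = vm_brk(start, len);
              if (IS_ERR_VALUE(addr))
                  return (int)addr;
              return 0;
          }

          /* after: plain 0 on success, -errno on failure */
          static int map_brk_new(unsigned long start, unsigned long len)
          {
              return vm_brk(start, len);
          }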
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d22fc25
    • platform/x86: Add PMC Driver for Intel Core SoC · b740d2e9
      Rajneesh Bhardwaj authored
      This patch adds the Power Management Controller driver as a PCI driver
      for Intel Core SoC architecture.
      
      This driver can utilize debugging capabilities and supported features
      as exposed by the Power Management Controller.
      
      Please refer to the below specification for more details on PMC features.
      http://www.intel.in/content/www/in/en/chipsets/100-series-chipset-datasheet-vol-2.html
      
      The current version of this driver exposes SLP_S0_RESIDENCY counter.
      This counter can be used for detecting fragile SLP_S0-signal-related
      failures and for taking corrective action when the PCH SLP_S0 signal is
      not asserted after kernel freeze as part of the suspend-to-idle flow
      (echo freeze > /sys/power/state).
      
      Intel Platform Controller Hub (PCH) asserts SLP_S0 signal when it
      detects favorable conditions to enter its low power mode. As a
      pre-requisite the SoC should be in deepest possible Package C-State
      and devices should be in low power mode. For example, on Skylake SoC
      the deepest Package C-State is Package C10 or PC10. Suspend to idle
      flow generally leads to PC10 state but PC10 state may not be sufficient
      for realizing the platform wide power potential which SLP_S0 signal
      assertion can provide.
      
      SLP_S0 signal is often connected to the Embedded Controller (EC) and the
      Power Management IC (PMIC) for other platform power management related
      optimizations.
      
      In general, SLP_S0 assertion == PC10 + PCH low power mode + ModPhy Lanes
      power gated + PLL Idle.
      
      As part of this driver, a mechanism to read SLP_S0_RESIDENCY is exposed
      as an API, and debugfs entries are added to report the SLP_S0 signal
      assertion residency in microseconds.
      
      echo freeze > /sys/power/state
      wake the system
      cat /sys/kernel/debug/pmc_core/slp_s0_residency_usec
      Signed-off-by: Rajneesh Bhardwaj <rajneesh.bhardwaj@intel.com>
      Signed-off-by: Vishwanath Somayaji <vishwanath.somayaji@intel.com>
      Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Signed-off-by: Darren Hart <dvhart@linux.intel.com>
      b740d2e9
  3. 25 May 2016 (1 commit)
    • kvm:vmx: more complete state update on APICv on/off · 3ce424e4
      Roman Kagan authored
      The function to update APICv on/off state (in particular, to deactivate
      it when enabling Hyper-V SynIC) is incomplete: it doesn't adjust
      APICv-related fields among secondary processor-based VM-execution
      controls.  As a result, Windows 2012 guests get stuck when SynIC-based
      auto-EOI interrupt intersected with e.g. an IPI in the guest.
      
      In addition, the MSR intercept bitmap isn't updated every time "virtualize
      x2APIC mode" is toggled.  This path can only be triggered by a malicious
      guest, because Windows doesn't use x2APIC but rather its own synthetic
      APIC access MSRs; however, a guest running in a SynIC-enabled VM could
      switch to x2APIC and thus obtain direct access to host APIC MSRs
      (CVE-2016-4440).
      
      The patch fixes those omissions.
      Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
      Reported-by: Steve Rutherford <srutherford@google.com>
      Reported-by: Yang Zhang <yang.zhang.wz@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3ce424e4
  4. 24 May 2016 (9 commits)
  5. 23 May 2016 (2 commits)
    • x86: remove more uaccess_32.h complexity · bd28b145
      Linus Torvalds authored
      I'm looking at trying to possibly merge the 32-bit and 64-bit versions
      of the x86 uaccess.h implementation, but first this needs to be cleaned
      up.
      
      For example, the 32-bit version of "__copy_from_user_inatomic()" is
      mostly the special cases for the constant size, and it's actually almost
      never relevant.  Most users aren't actually using a constant size
      anyway, and the few cases that do small constant copies are better off
      just using __get_user() instead.
      
      So get rid of the unnecessary complexity.
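      As a sketch, a hypothetical caller doing a small constant-size copy
      (names made up; error handling condensed):

          #include <linux/uaccess.h>

          /* before: constant-size __copy_from_user_inatomic() */
          static int read_u32_old(u32 *dst, const u32 __user *uptr)
          {
              return __copy_from_user_inatomic(dst, uptr, sizeof(*dst)) ?
                      -EFAULT : 0;
          }

          /* after: the simpler and equally fast __get_user() */
          static int read_u32_new(u32 *dst, const u32 __user *uptr)
          {
              return __get_user(*dst, uptr) ? -EFAULT : 0;
          }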
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bd28b145
    • x86: remove pointless uaccess_32.h complexity · 5b09c3ed
      Linus Torvalds authored
      I'm looking at trying to possibly merge the 32-bit and 64-bit versions
      of the x86 uaccess.h implementation, but first this needs to be cleaned
      up.
      
      For example, the 32-bit version of "__copy_to_user_inatomic()" is mostly
      the special cases for the constant size, and it's actually never
      relevant.  All users except one aren't actually using a constant size
      anyway, and the one that is would be better off just using
      __put_user() instead.
      
      So get rid of the unnecessary complexity.
      
      [ The same cleanup should likely happen to __copy_from_user_inatomic()
        as well, but that one has a lot more users that I need to take a look
        at first ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b09c3ed
  6. 22 May 2016 (4 commits)
    • um: add extended processor state save/restore support · a78ff111
      Eli Cooper authored
      This patch extends save_fp_registers() and restore_fp_registers() to use
      PTRACE_GETREGSET and PTRACE_SETREGSET with the XSTATE note type, adding
      support for new processor state extensions between context switches.
      
      When the new ptrace requests are unavailable, it falls back to the old
      PTRACE_GETFPREGS and PTRACE_SETFPREGS methods, which have been renamed to
      save_i387_registers() and restore_i387_registers().
      
      Now these functions expect *fp_regs to have the space of an _xstate struct.
      Thus, this also makes ptrace in UML respond to PTRACE_GETFPREGS/PTRACE_SETFPREGS
      requests with a user_i387_struct (thus independent from HOST_FP_SIZE),
      by calling save_i387_registers() and restore_i387_registers() instead of
      the extended save_fp_registers() and restore_fp_registers() functions.
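      A simplified sketch of the fallback logic described above (user-space
      ptrace calls; the real implementations live under arch/um and arch/x86/um):

          #include <elf.h>
          #include <sys/ptrace.h>
          #include <sys/uio.h>

          static int save_fp_registers_sketch(int pid, unsigned long *fp_regs,
                                              size_t xstate_size)
          {
              struct iovec iov = {
                  .iov_base = fp_regs,
                  .iov_len  = xstate_size,     /* room for a full _xstate image */
              };

              /* preferred path: the XSTATE regset, which covers newer state */
              if (ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov) == 0)
                  return 0;

              /* old host kernels: fall back to the classic i387 layout */
              return ptrace(PTRACE_GETFPREGS, pid, 0, fp_regs) == 0 ? 0 : -1;
          }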
      Signed-off-by: Eli Cooper <elicooper@gmx.com>
      a78ff111
    • um: extend fpstate to _xstate to support YMM registers · b6024b21
      Eli Cooper authored
      Extends fpstate to _xstate, in order to hold AVX/YMM registers.
      
      To avoid oversized stack frames, the following functions have been
      refactored to use malloc:
       - sig_handler_common
       - timer_real_alarm_handler
      Signed-off-by: Eli Cooper <elicooper@gmx.com>
      b6024b21
    • um: fix FPU state preservation around signal handlers · 6de5a8a5
      Eli Cooper authored
      This patch makes UML save/restore FPU state from/to the fpstate in
      pt_regs when setting up or returning from a signal stack, rather than
      calling ptrace directly. This ensures that FPU state is correctly
      preserved around signal handlers in a multi-threaded scenario.
      Signed-off-by: Eli Cooper <elicooper@gmx.com>
      6de5a8a5
    • x86 isa: add back X86_32 dependency on CONFIG_ISA · 51e68d05
      Linus Torvalds authored
      Commit b3c1be1b ("base: isa: Remove X86_32 dependency") made ISA
      support available on x86-64 too.  That's not right - while there are
      some LPC-style devices that might be useful still and be based on
      ISA-like IP blocks, that is *not* an excuse to try to enable any random
      legacy drivers.
      
      Such drivers should be individually enabled and made to perhaps depend
      on ISA_DMA_API instead (which we have continued to support on x86-64).
      Or we could add another "ISA_XYZ_API" that we support that doesn't
      enable random old drivers that aren't even 64-bit clean nor do we have
      any test coverage for.
      
      Turning off ISA will now also turn off some drivers that have been
      marked as depending on it as part of this series, and that used to work
      on modern platforms.
      
      See for example commits ad7afc38..cc736607, which may also need
      to be reverted.
      
      This commit means that the warnings that came in due to enabling ISA
      widely are now gone again.
      Acked-by: William Breathitt Gray <vilhelm.gray@gmail.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      51e68d05
  7. 21 May 2016 (5 commits)
    • printk/nmi: generic solution for safe printk in NMI · 42a0bb3f
      Petr Mladek authored
      printk() takes some locks and therefore cannot be used in a safe way in
      NMI context.
      
      The chance of a deadlock is real especially when printing stacks from
      all CPUs.  This particular problem has been addressed on x86 by the
      commit a9edc880 ("x86/nmi: Perform a safe NMI stack trace on all
      CPUs").
      
      The patchset brings two big advantages.  First, it makes the NMI
      backtraces safe on all architectures for free.  Second, it makes all NMI
      messages almost safe on all architectures (the temporary buffer is
      limited, so we should still keep the number of messages in NMI context
      to a minimum).
      
      Note that there already are several messages printed in NMI context:
      WARN_ON(in_nmi()), BUG_ON(in_nmi()), anything being printed out from MCE
      handlers.  These are not easy to avoid.
      
      This patch reuses most of the code and makes it generic.  It is useful
      for all messages and architectures that support NMI.
      
      The alternative printk_func is set when entering and is reset when
      leaving NMI context.  It queues IRQ work to copy the messages into the
      main ring buffer in a safe context.
      
      __printk_nmi_flush() copies all available messages and resets the buffer.
      A simple cmpxchg operation is then enough to stay synchronized with
      writers.  A spinlock is also used to synchronize with other flushers.
      
      We no longer use seq_buf because it depends on an external lock.  It
      would be hard to make all supported operations safe for a lockless use.
      It would be confusing and error prone to make only some operations safe.
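      A heavily simplified, single-buffer sketch of that scheme (illustrative
      only, with made-up names; the real code is per-CPU and flushes via IRQ
      work):

          #include <stdarg.h>
          #include <stdatomic.h>
          #include <stdio.h>

          #define NMI_BUF_SIZE 4096

          static struct {
              atomic_uint len;               /* bytes used, advanced with cmpxchg */
              char        buf[NMI_BUF_SIZE];
          } nmi_log;

          /* what the alternative printk_func boils down to in NMI context */
          static int vprintk_nmi_sketch(const char *fmt, va_list args)
          {
              unsigned int old, add;
              va_list copy;

              do {
                  old = atomic_load(&nmi_log.len);
                  if (old >= sizeof(nmi_log.buf))
                      return 0;              /* buffer full: message is dropped */

                  va_copy(copy, args);       /* args may be consumed more than once */
                  add = (unsigned int)vsnprintf(nmi_log.buf + old,
                                                sizeof(nmi_log.buf) - old,
                                                fmt, copy);
                  va_end(copy);
                  /* a racing writer advanced len: reformat at the new offset */
              } while (!atomic_compare_exchange_strong(&nmi_log.len, &old,
                                                       old + add));

              /* the real code queues irq_work here to copy the messages into
               * the main ring buffer from a safe context */
              return (int)add;
          }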
      
      The code is put into separate printk/nmi.c as suggested by Steven
      Rostedt.  It needs a per-CPU buffer and is compiled only on
      architectures that call nmi_enter().  This is achieved by the new
      HAVE_NMI Kconfig flag.
      
      The exceptions are the MN10300 and Xtensa architectures.  We need to clean up NMI
      handling there first.  Let's do it separately.
      
      The patch is heavily based on the draft from Peter Zijlstra, see
      
        https://lkml.org/lkml/2015/6/10/327
      
      [arnd@arndb.de: printk-nmi: use %zu format string for size_t]
      [akpm@linux-foundation.org: min_t->min - all types are size_t here]
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Jan Kara <jack@suse.cz>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>	[arm part]
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jiri Kosina <jkosina@suse.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42a0bb3f
    • exit_thread: accept a task parameter to be exited · e6464694
      Jiri Slaby authored
      We need to call exit_thread from copy_process in a fail path.  So make it
      accept task_struct as a parameter.
      
      [v2]
      * s390: exit_thread_runtime_instr doesn't make sense to be called for
        non-current tasks.
      * arm: fix the comment in vfp_thread_copy
      * change 'me' to 'tsk' for task_struct
      * now we can change only archs that actually have exit_thread
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e6464694
    • exit_thread: remove empty bodies · 5f56a5df
      Jiri Slaby authored
      Define HAVE_EXIT_THREAD for archs which want to do something in
      exit_thread. For others, let's define exit_thread as an empty inline.
      
      This is a cleanup before we change the prototype of exit_thread to
      accept a task parameter.
      
      [akpm@linux-foundation.org: fix mips]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f56a5df
    • x86/kasan: instrument user memory access API · 1771c6e1
      Andrey Ryabinin authored
      Exchanges between user and kernel memory are coded in assembly language,
      which means that such accesses won't be spotted by KASAN, as the compiler
      instruments only C code.

      Add explicit KASAN checks to the user memory access API to ensure that
      userspace writes to (or reads from) valid kernel memory.
      
      Note: Unlike the others, strncpy_from_user() is written mostly in C and
      KASAN sees the memory accesses in it.  However, it makes sense to add an
      explicit check for all @count bytes that *potentially* could be written
      to the kernel.
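      A hedged sketch of the pattern (illustrative wrapper, not the actual
      uaccess.h change; it assumes the kasan_check_read() helper from
      <linux/kasan-checks.h>):

          #include <linux/kasan-checks.h>
          #include <linux/uaccess.h>

          static inline unsigned long
          copy_to_user_checked(void __user *to, const void *from, unsigned long n)
          {
              /* the copy itself is assembly and invisible to KASAN, so check
               * the kernel-side source buffer explicitly first */
              kasan_check_read(from, n);
              return __copy_to_user(to, from, n);
          }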
      
      [aryabinin@virtuozzo.com: move kasan check under the condition]
        Link: http://lkml.kernel.org/r/1462869209-21096-1-git-send-email-aryabinin@virtuozzo.com
      Link: http://lkml.kernel.org/r/1462538722-1574-4-git-send-email-aryabinin@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1771c6e1
    • ftrace/x86: Set ftrace_stub to weak to prevent gcc from using short jumps to it · 8329e818
      Steven Rostedt authored
      Matt Fleming reported seeing crashes when enabling and disabling
      function profiling which uses function graph tracer. Later Namhyung Kim
      hit a similar issue and he found that the issue was due to the jmp to
      ftrace_stub in ftrace_graph_call was only two bytes, and when it was
      changed to jump to the tracing code, it overwrote the ftrace_stub that
      was after it.
      
      Masami Hiramatsu bisected this down to a binutils change:
      
      8dcea93252a9ea7dff57e85220a719e2a5e8ab41 is the first bad commit
      commit 8dcea93252a9ea7dff57e85220a719e2a5e8ab41
      Author: H.J. Lu <hjl.tools@gmail.com>
      Date:   Fri May 15 03:17:31 2015 -0700
      
          Add -mshared option to x86 ELF assembler
      
          This patch adds -mshared option to x86 ELF assembler.  By default,
          assembler will optimize out non-PLT relocations against defined non-weak
          global branch targets with default visibility.  The -mshared option tells
          the assembler to generate code which may go into a shared library
          where all non-weak global branch targets with default visibility can
          be preempted.  The resulting code is slightly bigger.  This option
          only affects the handling of branch instructions.
      
      Declaring ftrace_stub as a weak call prevents gas from using two byte
      jumps to it, which would be converted to a jump to the function graph
      code.
      
      Link: http://lkml.kernel.org/r/20160516230035.1dbae571@gandalf.local.home
      Reported-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reported-by: Namhyung Kim <namhyung@kernel.org>
      Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      8329e818
  8. 20 May 2016 (7 commits)
    • perf/x86/intel/p4: Trival indentation fix, remove space · 1ab94188
      Colin Ian King authored
      Remove an extraneous space to fix up indentation. Trivial and no
      functional change
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1463503215-18339-1-git-send-email-colin.king@canonical.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1ab94188
    • x86/mm: Switch from TASK_SIZE to TASK_SIZE_MAX in the page fault code · dc4fac84
      Andy Lutomirski authored
      x86's page fault handlers had two TASK_SIZE uses that should have
      been TASK_SIZE_MAX.  I don't think that either one had a visible
      effect, but this makes the code clearer and should save a few bytes
      of text.
      
      (And I eventually want to eradicate TASK_SIZE.  This will help.)
      Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Safonov <0x7f454c46@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ruslan Kabatsayev <b7.10110111@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1242fb23b0d05c3069dbf5758ac55d26bc114bef.1462914565.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dc4fac84
    • x86/fsgsbase/64: Use TASK_SIZE_MAX for FSBASE/GSBASE upper limits · d696ca01
      Andy Lutomirski authored
      The GSBASE upper limit exists to prevent user code from confusing
      the paranoid idtentry path.  The FSBASE upper limit is just for
      consistency.  There's no need to enforce a smaller limit for 32-bit
      tasks.
      
      Just use TASK_SIZE_MAX.  This simplifies the logic and will save a
      few bytes of code.
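      Sketched as a plain predicate (the TASK_SIZE_MAX value shown is for
      x86-64 with 4-level paging; names made up):

          #include <errno.h>
          #include <stdint.h>

          #define TASK_SIZE_MAX_SKETCH 0x00007ffffffff000ULL

          /* one upper limit for every task, whether it is 32-bit or not */
          static int fsgsbase_value_ok(uint64_t base)
          {
              return base < TASK_SIZE_MAX_SKETCH ? 0 : -EPERM;
          }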
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5357f2fe0f103eabf005773b70722451eab09a89.1462897104.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d696ca01
    • x86/mm/mpx: Work around MPX erratum SKD046 · 0f6ff2bc
      Dave Hansen authored
      This erratum essentially causes the CPU to forget which privilege
      level it is operating on (kernel vs. user) for the purposes of MPX.
      
      This erratum can only be triggered when a system is not using
      Supervisor Mode Execution Prevention (SMEP).  Our workaround for
      the erratum is to ensure that MPX can only be used in cases where
      SMEP is present in the processor and is enabled.
      
      This erratum only affects Core processors.  Atom is unaffected.
      But, there is no architectural way to determine Atom vs. Core.
      So, we just apply this workaround to all processors.  It's
      possible that it will mistakenly disable MPX on some Atom
      processors or future unaffected Core processors.  There are
      currently no processors that have MPX and not SMEP.  It would
      take something akin to a hypervisor masking SMEP out on an Atom
      processor for this to present itself on current hardware.
      
      More details can be found at:
      
        http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/desktop-6th-gen-core-family-spec-update.pdf
      
      "
        SKD046 Branch Instructions May Initialize MPX Bound Registers Incorrectly
      
        Problem:
      
        Depending on the current Intel MPX (Memory Protection
        Extensions) configuration, execution of certain branch
        instructions (near CALL, near RET, near JMP, and Jcc
        instructions) without a BND prefix (F2H) initialize the MPX bound
        registers. Due to this erratum, such a branch instruction that is
        executed both with CPL = 3 and with CPL < 3 may not use the
        correct MPX configuration register (BNDCFGU or BNDCFGS,
        respectively) for determining whether to initialize the bound
        registers; it may thus initialize the bound registers when it
        should not, or fail to initialize them when it should.
      
        Implication:
      
        A branch instruction that has executed both in user mode and in
        supervisor mode (from the same linear address) may cause a #BR
        (bound range fault) when it should not have or may not cause a
        #BR when it should have.

        Workaround:

        An operating system can avoid this erratum by setting
        CR4.SMEP[bit 20] to enable supervisor-mode execution prevention
        (SMEP). When SMEP is enabled, no code can be executed both with
        CPL = 3 and with CPL < 3.
      "
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160512220400.3B35F1BC@viggo.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0f6ff2bc
    • arch: fix has_transparent_hugepage() · fd8cfd30
      Hugh Dickins authored
      I've just discovered that the useful-sounding has_transparent_hugepage()
      is actually an architecture-dependent minefield: on some arches it only
      builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
      not, but on some of those (arm and arm64) it then gives the wrong
      answer; and on mips alone it's marked __init, which would crash if
      called later (but so far it has not been called later).
      
      Straighten this out: make it available to all configs, with a sensible
      default in asm-generic/pgtable.h, removing its definitions from those
      arches (arc, arm, arm64, sparc, tile) which are served by the default,
      adding #define has_transparent_hugepage has_transparent_hugepage to
      those (mips, powerpc, s390, x86) which need to override the default at
      runtime, and removing the __init from mips (but maybe that kind of code
      should be avoided after init: set a static variable the first time it's
      called).
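      The asm-generic default amounts to something like this sketch (arches
      that need a runtime answer #define has_transparent_hugepage themselves):

          #ifndef has_transparent_hugepage
          #ifdef CONFIG_TRANSPARENT_HUGEPAGE
          #define has_transparent_hugepage() 1
          #else
          #define has_transparent_hugepage() 0
          #endif
          #endif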
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andres Lagar-Cavilla <andreslc@google.com>
      Cc: Yang Shi <yang.shi@linaro.org>
      Cc: Ning Qu <quning@gmail.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>		[arch/arc]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>	[arch/s390]
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd8cfd30
    • x86: mm: use hugetlb_bad_size() · 2b18e532
      Vaishali Thakkar authored
      Update setup_hugepagesz() to call hugetlb_bad_size() when unsupported
      hugepage size is found.
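      A simplified sketch of the change in setup_hugepagesz() (illustrative;
      the actual size check is condensed into a flag here):

          static int setup_hugepagesz_sketch(unsigned long ps, int size_is_supported)
          {
              if (size_is_supported) {
                  hugetlb_add_hstate(ilog2(ps) - PAGE_SHIFT);
                  return 1;
              }
              hugetlb_bad_size();     /* new: remember that a bad size was given */
              pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20);
              return 0;
          }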
      Signed-off-by: Vaishali Thakkar <vaishali.thakkar@oracle.com>
      Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
      Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2b18e532
    • include/linux/nodemask.h: create next_node_in() helper · 0edaf86c
      Andrew Morton authored
      Lots of code does
      
      	node = next_node(node, XXX);
      	if (node == MAX_NUMNODES)
      		node = first_node(XXX);
      
      so create next_node_in() to do this and use it in various places.
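      The helper's semantics, as a sketch mirroring the open-coded pattern
      above (the real definition lives in include/linux/nodemask.h):

          #include <linux/nodemask.h>

          static inline int next_node_in_sketch(int node, nodemask_t mask)
          {
              int ret = next_node(node, mask);

              if (ret == MAX_NUMNODES)
                  ret = first_node(mask);
              return ret;
          }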
      
      [mhocko@suse.com: use next_node_in() helper]
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Hui Zhu <zhuhui@xiaomi.com>
      Cc: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0edaf86c
  9. 19 May 2016 (4 commits)