1. 09 Oct 2017: 7 commits
  2. 06 Oct 2017: 5 commits
  3. 05 Oct 2017: 3 commits
    • x86/kvm: Move kvm_fastop_exception to .fixup section · f26e6016
      Committed by Josh Poimboeuf
      When compiling the kernel with the '-frecord-gcc-switches' flag, objtool
      complains:
      
        arch/x86/kvm/emulate.o: warning: objtool: .GCC.command.line+0x0: special: can't find new instruction
      
      And also the kernel fails to link.
      
      The problem is that the 'kvm_fastop_exception' code gets placed into the
      throwaway '.GCC.command.line' section instead of '.text'.
      
      Exception fixup code is conventionally placed in the '.fixup' section,
      so put it there where it belongs (a sketch of the section-placement
      pattern follows this entry).
      Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
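      The fix above boils down to an explicit section directive around the
      handler. A minimal sketch of the pattern, assuming the handler is
      defined in a bare top-level asm() statement (the actual hunk in
      arch/x86/kvm/emulate.c may differ in detail):

        /*
         * Sketch only: emit the handler into '.fixup' explicitly, so it no
         * longer lands in whatever section happens to be current (such as
         * '.GCC.command.line' when -frecord-gcc-switches is used).
         */
        asm(".pushsection .fixup, \"ax\"\n"
            ".global kvm_fastop_exception\n"
            "kvm_fastop_exception:\n"
            "    xor %esi, %esi\n"
            "    ret\n"
            ".popsection");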
    • arm64: Use larger stacks when KASAN is selected · b02faed1
      Committed by Mark Rutland
      AddressSanitizer instrumentation can significantly bloat the stack, and
      with GCC 7 this can result in stack overflows at boot time in some
      configurations.
      
      We can avoid this by doubling our stack size when KASAN is in use, as is
      already done on x86 (and has been since KASAN was introduced).
      Regardless of other patches that decrease KASAN's stack utilization,
      kernels built with KASAN will always require more stack space than those
      built without, and we should take this into account (a sketch of the
      approach follows this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
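      A sketch of the approach described above; the macro names follow the
      style of arm64's THREAD_SIZE definitions, but the exact names and values
      in the patch may differ:

        /* Sketch: double the thread stack when KASAN instrumentation is on. */
        #define MIN_THREAD_SHIFT    14                        /* 16 KiB stack */

        #ifdef CONFIG_KASAN
        #define THREAD_SHIFT        (MIN_THREAD_SHIFT + 1)    /* 32 KiB stack */
        #else
        #define THREAD_SHIFT        MIN_THREAD_SHIFT
        #endif

        #define THREAD_SIZE         (1UL << THREAD_SHIFT)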
    • kvm/x86: Avoid async PF preempting the kernel incorrectly · a2b7861b
      Committed by Boqun Feng
      Currently, in a PREEMPT_COUNT=n kernel, kvm_async_pf_task_wait() can
      call schedule() to reschedule in some cases.  This can accidentally end
      the current RCU read-side critical section early, causing random memory
      corruption in the guest, or otherwise preempt the currently running task
      between preempt_disable() and preempt_enable().
      
      Handling this well is difficult because, with PREEMPT_COUNT=n, we cannot
      tell whether an async PF was delivered in a preemptible section or in an
      RCU read-side critical section: preempt_disable()/preempt_enable() and
      rcu_read_lock()/rcu_read_unlock() are both no-ops in that configuration.
      
      To cure this, we treat any async PF interrupting a kernel context as one
      that cannot be preempted, preventing kvm_async_pf_task_wait() from choosing
      the schedule() path in that case.
      
      To do so, a second parameter is added to kvm_async_pf_task_wait() so
      that we know whether it was called from a context that interrupted the
      kernel, and the parameter is set appropriately at all call sites (a
      simplified sketch follows this entry).
      
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
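      A simplified sketch of the resulting logic; apf_page_ready() is an
      illustrative placeholder, and the real kvm_async_pf_task_wait() also
      deals with wait queues and the idle task:

        /*
         * interrupt_kernel tells us the async PF hit kernel mode.  With
         * PREEMPT_COUNT=n nothing tracks preempt/RCU nesting, so in that
         * configuration a kernel-mode async PF must never schedule().
         */
        void kvm_async_pf_task_wait(u32 token, int interrupt_kernel)
        {
            bool can_schedule;

            if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
                /* Nesting is tracked, so check it directly. */
                can_schedule = !preempt_count() && !rcu_preempt_depth();
            else
                /* Nothing is tracked: only trust async PFs from user mode. */
                can_schedule = !interrupt_kernel;

            if (can_schedule) {
                /* Sleep until the host reports the page as ready. */
                for (;;) {
                    set_current_state(TASK_UNINTERRUPTIBLE);
                    if (apf_page_ready(token))    /* apf_page_ready() is illustrative */
                        break;
                    schedule();
                }
                __set_current_state(TASK_RUNNING);
            } else {
                /* Kernel context we cannot reason about: halt-wait instead. */
                while (!apf_page_ready(token))
                    native_safe_halt();
            }
        }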
  4. 04 Oct 2017: 25 commits