1. 31 May 2019 (16 commits)
  2. 26 May 2019 (6 commits)
    • perf/x86/intel: Fix race in intel_pmu_disable_event() · a0b1dde1
      Committed by Jiri Olsa
      [ Upstream commit 6f55967ad9d9752813e36de6d5fdbd19741adfc7 ]
      
      A new race in x86_pmu_stop() was introduced by replacing the
      atomic __test_and_clear_bit() of cpuc->active_mask with separate
      test_bit() and __clear_bit() calls in the following commit:
      
        3966c3feca3f ("x86/perf/amd: Remove need to check "running" bit in NMI handler")
      
      The race causes a panic for PEBS events with callchains enabled:
      
        BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
        ...
        RIP: 0010:perf_prepare_sample+0x8c/0x530
        Call Trace:
         <NMI>
         perf_event_output_forward+0x2a/0x80
         __perf_event_overflow+0x51/0xe0
         handle_pmi_common+0x19e/0x240
         intel_pmu_handle_irq+0xad/0x170
         perf_event_nmi_handler+0x2e/0x50
         nmi_handle+0x69/0x110
         default_do_nmi+0x3e/0x100
         do_nmi+0x11a/0x180
         end_repeat_nmi+0x16/0x1a
        RIP: 0010:native_write_msr+0x6/0x20
        ...
         </NMI>
         intel_pmu_disable_event+0x98/0xf0
         x86_pmu_stop+0x6e/0xb0
         x86_pmu_del+0x46/0x140
         event_sched_out.isra.97+0x7e/0x160
        ...
      
      The event is configured to take samples from the PEBS drain code,
      but when it is disabled we go through the NMI path instead, where
      data->callchain does not get allocated and we crash:
      
                x86_pmu_stop
                  test_bit(hwc->idx, cpuc->active_mask)
                  intel_pmu_disable_event(event)
                  {
                    ...
                    intel_pmu_pebs_disable(event);
                    ...
      
      EVENT OVERFLOW ->  <NMI>
                           intel_pmu_handle_irq
                             handle_pmi_common
   TEST PASSES ->        test_bit(bit, cpuc->active_mask)
                                 perf_event_overflow
                                   perf_prepare_sample
                                   {
                                     ...
                                     if (!(sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
                                           data->callchain = perf_callchain(event, regs);
      
               CRASH ->              size += data->callchain->nr;
                                   }
                         </NMI>
                    ...
                    x86_pmu_disable_event(event)
                  }
      
                  __clear_bit(hwc->idx, cpuc->active_mask);
      
      Fix this by disabling the event itself before clearing the
      PEBS bit.
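
      A minimal sketch of the reordering, simplified from the kernel's
      intel_pmu_disable_event() (fixed-counter handling and other details
      omitted; only the relative order of the two calls matters here):

        static void intel_pmu_disable_event(struct perf_event *event)
        {
                /*
                 * Stop the counter first, so it can no longer overflow
                 * during the window where PEBS is already disabled but
                 * the counter is still armed.
                 */
                x86_pmu_disable_event(event);

                /*
                 * Must run after x86_pmu_disable_event(), so we never
                 * trigger the event without the PEBS bit set.
                 */
                if (unlikely(event->attr.precise_ip))
                        intel_pmu_pebs_disable(event);
        }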
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Arcari <darcari@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Lendacky Thomas <Thomas.Lendacky@amd.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 3966c3feca3f ("x86/perf/amd: Remove need to check "running" bit in NMI handler")
      Link: http://lkml.kernel.org/r/20190504151556.31031-1-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • x86/mm/mem_encrypt: Disable all instrumentation for early SME setup · f037116f
      Committed by Gary Hook
      [ Upstream commit b51ce3744f115850166f3d6c292b9c8cb849ad4f ]
      
      Enablement of AMD's Secure Memory Encryption feature is determined very
      early after start_kernel() is entered. Part of this procedure involves
      scanning the command line for the parameter 'mem_encrypt'.
      
      To determine intended state, the function sme_enable() uses library
      functions cmdline_find_option() and strncmp(). Their use occurs early
      enough such that it cannot be assumed that any instrumentation subsystem
      is initialized.
      
      For example, making calls to a KASAN-instrumented function before KASAN
      is set up will result in the use of uninitialized memory and a boot
      failure.
      
      When AMD's SME support is enabled, conditionally disable instrumentation
      of these dependent functions in lib/string.c and arch/x86/lib/cmdline.c.
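
      The upstream patch does this at the object-file level via kbuild
      (disabling KASAN, KCOV and -pg for the affected objects, e.g.
      KASAN_SANITIZE_cmdline.o := n). As an illustration of the same idea
      at function granularity, here is a hedged C sketch using GCC/Clang
      attributes; the early_strncmp() helper is hypothetical, not from
      the patch:

        /* Exempt a single early-boot helper from KASAN and function
         * instrumentation, so it is safe to call before either
         * subsystem is initialized. */
        #define __no_kasan __attribute__((no_sanitize_address))
        #define __no_trace __attribute__((no_instrument_function))

        static __no_kasan __no_trace int early_strncmp(const char *a,
                                                       const char *b,
                                                       unsigned long n)
        {
                for (; n; n--, a++, b++) {
                        if (*a != *b)
                                return (unsigned char)*a - (unsigned char)*b;
                        if (*a == '\0')
                                break;
                }
                return 0;
        }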
      
       [ bp: Get rid of intermediary nostackp var and cleanup whitespace. ]
      
      Fixes: aca20d54 ("x86/mm: Add support to make use of Secure Memory Encryption")
      Reported-by: Li RongQing <lirongqing@baidu.com>
      Signed-off-by: Gary R Hook <gary.hook@amd.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Boris Brezillon <bbrezillon@kernel.org>
      Cc: Coly Li <colyli@suse.de>
      Cc: "dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: "luto@kernel.org" <luto@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "mingo@redhat.com" <mingo@redhat.com>
      Cc: "peterz@infradead.org" <peterz@infradead.org>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/155657657552.7116.18363762932464011367.stgit@sosrh3.amd.com
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • x86: kvm: hyper-v: deal with buggy TLB flush requests from WS2012 · a0a49d87
      Committed by Vitaly Kuznetsov
      [ Upstream commit da66761c2d93a46270d69001abb5692717495a68 ]
      
      It was reported that with a special Multi-Processor Group
      configuration, e.g.:
       bcdedit.exe /set groupsize 1
       bcdedit.exe /set maxgroup on
       bcdedit.exe /set groupaware on
      a 16-vCPU WS2012 guest shows a BSOD on boot when the PV TLB flush
      mechanism is in use.
      
      Tracing kvm_hv_flush_tlb immediately reveals the issue:
      
       kvm_hv_flush_tlb: processor_mask 0x0 address_space 0x0 flags 0x2
      
      The only flag set in this request is HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
      however, processor_mask is 0x0 and HV_FLUSH_ALL_PROCESSORS is not
      specified. We don't flush anything, which is apparently not what
      Windows expects.
      
      The TLFS doesn't say anything about such requests, and newer Windows
      versions seem to be unaffected. This all looks like a WS2012 bug that
      is, however, easy to work around in KVM: flush everything when we see
      an empty flush request, since over-flushing doesn't hurt.
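
      A hedged sketch of the workaround logic with simplified stand-in
      types (the real code lives in KVM's kvm_hv_flush_tlb() and uses the
      hypercall input structures from the Hyper-V TLFS headers):

        #include <stdbool.h>
        #include <stdint.h>

        #define HV_FLUSH_ALL_PROCESSORS  (1ULL << 0)

        struct hv_tlb_flush {            /* simplified hypercall input */
                uint64_t address_space;
                uint64_t flags;
                uint64_t processor_mask;
        };

        /* Returns true if all vCPUs must be flushed. The second clause is
         * the WS2012 workaround: an empty processor_mask without
         * HV_FLUSH_ALL_PROCESSORS would otherwise flush nothing. */
        static bool flush_all_cpus(const struct hv_tlb_flush *req)
        {
                return (req->flags & HV_FLUSH_ALL_PROCESSORS) ||
                       req->processor_mask == 0;
        }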
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • ftrace/x86_64: Emulate call function while updating in breakpoint handler · 07b487eb
      Committed by Peter Zijlstra
      commit 9e298e8604088a600d8100a111a532a9d342af09 upstream.
      
      Nicolai Stange discovered[1] that if live kernel patching is enabled,
      and the function tracer starts tracing the same function that was
      patched, the conversion of the fentry call site during the transition
      from calling the live kernel patch trampoline to calling the iterator
      trampoline has a slight window where it doesn't call anything.
      
      Live kernel patching depends on ftrace always calling its code (to
      prevent the function being traced from being called, as it redirects
      it), so this small window would allow the old buggy function to be
      called, which can cause undesirable results.
      
      Nicolai submitted new patches[2], but these were controversial, as
      the problem is similar to the static call emulation issues that came
      up a while ago[3]. After some debate[4][5], it was settled that adding
      a gap in the stack when entering the breakpoint handler allows the
      return address to be pushed onto the stack, making it easy to emulate
      a call.
      
      [1] http://lkml.kernel.org/r/20180726104029.7736-1-nstange@suse.de
      [2] http://lkml.kernel.org/r/20190427100639.15074-1-nstange@suse.de
      [3] http://lkml.kernel.org/r/3cf04e113d71c9f8e4be95fb84a510f085aa4afa.1541711457.git.jpoimboe@redhat.com
      [4] http://lkml.kernel.org/r/CAHk-=wh5OpheSU8Em_Q3Hg8qw_JtoijxOdPtHru6d+5K8TWM=A@mail.gmail.com
      [5] http://lkml.kernel.org/r/CAHk-=wjvQxY4DvPrJ6haPgAa6b906h=MwZXO6G8OtiTGe=N7_w@mail.gmail.com
      
      [
        Live kernel patching is not implemented on x86_32, thus the emulated
        calls are only for x86_64.
      ]
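
      A hedged sketch of the resulting int3 handler path on x86_64
      (simplified and illustrative, not the exact patch; ftrace_location()
      and ftrace_regs_caller are real kernel symbols, and the
      int3_emulate_*() helpers are sketched under the next commit):

        static int ftrace_int3_handler(struct pt_regs *regs)
        {
                unsigned long ip = regs->ip - 1;   /* address of the int3 byte */

                if (!ftrace_location(ip))
                        return 0;                  /* not one of our call sites */

                /*
                 * Instead of merely skipping the patched call site (leaving
                 * a window where nothing is called), emulate the call: push
                 * the return address and jump to the trampoline.
                 */
                int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
                return 1;
        }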
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: the arch/x86 maintainers <x86@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Joe Lawrence <joe.lawrence@redhat.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Mimi Zohar <zohar@linux.ibm.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nayna Jain <nayna@linux.ibm.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: "open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@vger.kernel.org>
      Cc: stable@vger.kernel.org
      Fixes: b700e7f0 ("livepatch: kernel: add support for live patching")
      Tested-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      [ Changed to only implement emulated calls for x86_64 ]
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86_64: Allow breakpoints to emulate call instructions · ba246f64
      Committed by Peter Zijlstra
      commit 4b33dadf37666c0860b88f9e52a16d07bf6d0b03 upstream.
      
      In order to allow breakpoints to emulate call instructions, they need to push
      the return address onto the stack. The x86_64 int3 handler adds a small gap
      to allow the stack to grow some. Use this gap to add the return address to
      be able to emulate a call instruction at the breakpoint location.
      
      These helper functions are added:
      
        int3_emulate_jmp():  change regs->ip so execution resumes at the
                             given location

       (The next two are only for x86_64:)
        int3_emulate_push(): push a value onto the gap in the stack
        int3_emulate_call(): push the return address and change regs->ip
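
      A minimal sketch of these helpers, modeled on the kernel's
      arch/x86/include/asm/text-patching.h at the time (struct pt_regs is
      the kernel's saved-register frame; exact definitions may differ):

        #define INT3_INSN_SIZE 1
        #define CALL_INSN_SIZE 5

        static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
        {
                regs->ip = ip;
        }

        #ifdef CONFIG_X86_64
        static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
        {
                /*
                 * The int3 entry code leaves a gap below the iret frame
                 * (see the next commit), so growing the stack by one
                 * word here is safe.
                 */
                regs->sp -= sizeof(unsigned long);
                *(unsigned long *)regs->sp = val;
        }

        static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
        {
                /* Return address: the instruction after the emulated
                 * 5-byte call (regs->ip points just past the int3 byte). */
                int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
                int3_emulate_jmp(regs, func);
        }
        #endif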
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Nicolai Stange <nstange@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: the arch/x86 maintainers <x86@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Miroslav Benes <mbenes@suse.cz>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Joe Lawrence <joe.lawrence@redhat.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Mimi Zohar <zohar@linux.ibm.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Nayna Jain <nayna@linux.ibm.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: "open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@vger.kernel.org>
      Cc: stable@vger.kernel.org
      Fixes: b700e7f0 ("livepatch: kernel: add support for live patching")
      Tested-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      [ Modified to only work for x86_64 and added comment to int3_emulate_push() ]
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86_64: Add gap to int3 to allow for call emulation · 01b6fdce
      Committed by Josh Poimboeuf
      commit 2700fefdb2d9751c416ad56897e27d41e409324a upstream.
      
      To allow an int3 handler to emulate a call instruction, it must be
      able to push a return address onto the stack. Add a gap to the stack
      to let the int3 handler push the return address, and change the
      return from int3 to jump straight to the target of the emulated call.
      
      Link: http://lkml.kernel.org/r/20181130183917.hxmti5josgq4clti@treble
      Link: http://lkml.kernel.org/r/20190502162133.GX2623@hirez.programming.kicks-ass.net
      
      [
        Note, this is needed to allow Live Kernel Patching to not miss calling a
        patched function when tracing is enabled. -- Steven Rostedt
      ]
      
      Cc: stable@vger.kernel.org
      Fixes: b700e7f0 ("livepatch: kernel: add support for live patching")
      Tested-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Nicolai Stange <nstange@suse.de>
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
  3. 22 May 2019 (6 commits)
  4. 17 May 2019 (5 commits)
  5. 15 May 2019 (7 commits)