1. 30 January 2018, 4 commits
  2. 28 January 2018, 4 commits
  3. 27 January 2018, 1 commit
  4. 26 January 2018, 13 commits
  5. 25 January 2018, 3 commits
    • perf/x86: Fix perf,x86,cpuhp deadlock · efe951d3
      Authored by Peter Zijlstra
      More lockdep gifts, a 5-way lockup race:
      
      	perf_event_create_kernel_counter()
      	  perf_event_alloc()
      	    perf_try_init_event()
      	      x86_pmu_event_init()
      		__x86_pmu_event_init()
      		  x86_reserve_hardware()
       #0		    mutex_lock(&pmc_reserve_mutex);
      		    reserve_ds_buffer()
       #1		      get_online_cpus()
      
      	perf_event_release_kernel()
      	  _free_event()
      	    hw_perf_event_destroy()
      	      x86_release_hardware()
       #0		mutex_lock(&pmc_reserve_mutex)
      		release_ds_buffer()
       #1		  get_online_cpus()
      
       #1	do_cpu_up()
      	  perf_event_init_cpu()
       #2	    mutex_lock(&pmus_lock)
       #3	    mutex_lock(&ctx->mutex)
      
      	sys_perf_event_open()
      	  mutex_lock_double()
       #3	    mutex_lock(ctx->mutex)
       #4	    mutex_lock_nested(ctx->mutex, 1);
      
      	perf_try_init_event()
       #4	  mutex_lock_nested(ctx->mutex, 1)
      	  x86_pmu_event_init()
      	    intel_pmu_hw_config()
      	      x86_add_exclusive()
       #0		mutex_lock(&pmc_reserve_mutex)
      
      Fix it by using ordering constructs instead of locking; a rough sketch
      of the pattern follows this entry.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      efe951d3
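      A rough, generic illustration of the pattern (not the upstream efe951d3
      diff; the helper name is hypothetical): instead of taking
      get_online_cpus() while pmc_reserve_mutex is already held, cover every
      possible CPU and let the hotplug callback pick up the work when a CPU
      comes online, so the CPU-hotplug lock is no longer taken inside the
      mutex and the cycle above is broken.

        /* Hypothetical sketch in kernel C, not the actual patch. */
        static int reserve_buffers(void)
        {
                int cpu;

                lockdep_assert_held(&pmc_reserve_mutex);

                /*
                 * Allocate for every possible CPU instead of only the online
                 * ones, so get_online_cpus() is not needed here.  A CPU that
                 * comes online later finds its buffer from the hotplug
                 * callback, which by construction runs after this allocation.
                 */
                for_each_possible_cpu(cpu) {
                        if (alloc_buffer_for_cpu(cpu))  /* hypothetical helper */
                                return -ENOMEM;
                }
                return 0;
        }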
    • KVM: VMX: Make indirect call speculation safe · c940a3fb
      Authored by Peter Zijlstra
      Replace the indirect call with CALL_NOSPEC; a sketch of the conversion
      pattern follows this entry.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Jun Nakajima <jun.nakajima@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: rga@amazon.de
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: https://lkml.kernel.org/r/20180125095843.645776917@infradead.org
      c940a3fb
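      A simplified sketch of the conversion (not the exact hunk from c940a3fb;
      the real call site passes additional operands and clobbers, and the
      wrapper function here is hypothetical):

        #include <asm/nospec-branch.h>

        static void call_entry(void (*fn)(void))   /* hypothetical wrapper */
        {
                /* Before: a plain indirect call, exposed to branch-target
                 * injection (Spectre v2). */
                asm volatile("call *%[target]\n"
                             : : [target] "r" (fn) : "memory");

                /* After: CALL_NOSPEC routes the call through a retpoline
                 * thunk when X86_FEATURE_RETPOLINE is enabled; THUNK_TARGET()
                 * supplies the operand under the name the macro expects. */
                asm volatile(CALL_NOSPEC
                             : : THUNK_TARGET(fn) : "memory");
        }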
    • KVM: x86: Make indirect calls in emulator speculation safe · 1a29b5b7
      Authored by Peter Zijlstra
      Replace the indirect calls with CALL_NOSPEC, using the same pattern
      sketched after the previous entry.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Jun Nakajima <jun.nakajima@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: rga@amazon.de
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: https://lkml.kernel.org/r/20180125095843.595615683@infradead.org
      1a29b5b7
  6. 24 January 2018, 6 commits
    • x86/microcode: Fix again accessing initrd after having been freed · 1d080f09
      Authored by Borislav Petkov
      Commit 24c25032 ("x86/microcode: Do not access the initrd after it has
      been freed") fixed attempts to access initrd from the microcode loader
      after it has been freed. However, a similar KASAN warning was reported
      (stack trace edited):
      
        smpboot: Booting Node 0 Processor 1 APIC 0x11
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data+0x9b5/0xa50
        Read of size 1 at addr ffff880035ffd000 by task swapper/1/0
      
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.14.8-slack #7
        Hardware name: System manufacturer System Product Name/A88X-PLUS, BIOS 3003 03/10/2016
        Call Trace:
         dump_stack
         print_address_description
         kasan_report
         ? find_cpio_data
         __asan_report_load1_noabort
         find_cpio_data
         find_microcode_in_initrd
         __load_ucode_amd
         load_ucode_amd_ap
            load_ucode_ap
      
      After some investigation, it turned out that a merge conflict had been
      resolved from the wrong side, picking up the state prior to the
      24c25032 fix. Therefore the Fixes tag below points at a merge commit.
      
      Revert the mismerge by catching the save_microcode_in_initrd_amd()
      return value, so that the function exits through its last return
      statement and initrd_gone can be set to true (see the sketch after this
      entry).
      
      Fixes: f26483ea ("Merge branch 'x86/urgent' into x86/microcode, to resolve conflicts")
      Reported-by: <higuita@gmx.net>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=198295
      Link: https://lkml.kernel.org/r/20180123104133.918-2-bp@alien8.de
      1d080f09
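      Roughly the shape of the fixed function (a simplified sketch, not the
      verbatim upstream code):

        /* arch/x86/kernel/cpu/microcode/core.c, simplified */
        static int __init save_microcode_in_initrd(void)
        {
                struct cpuinfo_x86 *c = &boot_cpu_data;
                int ret = -EINVAL;

                switch (c->x86_vendor) {
                case X86_VENDOR_AMD:
                        /* Capture the return value instead of returning
                         * directly, so the function falls through below. */
                        if (c->x86 >= 0x10)
                                ret = save_microcode_in_initrd_amd(cpuid_eax(1));
                        break;
                /* ... other vendors elided ... */
                }

                /* Reached on the AMD path again, so later loads know the
                 * initrd image must no longer be touched. */
                initrd_gone = true;

                return ret;
        }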
    • x86/microcode/intel: Extend BDW late-loading further with LLC size check · 7e702d17
      Authored by Jia Zhang
      Commit b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a
      revision check") reduced the impact of erratum BDF90 for Broadwell model
      79.
      
      The impact can be reduced further by checking the size of the last-level
      cache portion per core (see the sketch after this entry).
      
      Tony: "The erratum says the problem only occurs on the large-cache SKUs.
      So we only need to avoid the update if we are on a big cache SKU that is
      also running old microcode."
      
      For more details, see erratum BDF90 in document #334165 (Intel Xeon
      Processor E7-8800/4800 v4 Product Family Specification Update) from
      September 2017.
      
      Fixes: b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a revision check")
      Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1516321542-31161-1-git-send-email-zhang.jia@linux.alibaba.com
      7e702d17
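      A sketch of the check (simplified; the threshold and field names follow
      the upstream patch only approximately):

        /* Per-core share of the last-level cache; x86_cache_size is in KB. */
        static u64 calc_llc_size_per_core(struct cpuinfo_x86 *c)
        {
                u64 llc_size = c->x86_cache_size * 1024ULL;

                do_div(llc_size, c->x86_max_cores);
                return llc_size;
        }

        /* Erratum BDF90 only hits the large-cache SKUs, so late loading is
         * refused only when the per-core LLC share exceeds 2.5 MB and the
         * running microcode is older than the fixed revision. */
        if (llc_size_per_core > 2621440 && c->microcode < 0x0b000021)
                return true;    /* blacklist late loading on this CPU */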
    • perf/x86/amd/power: Do not load AMD power module on !AMD platforms · 40d4071c
      Authored by Xiao Liang
      The AMD power module can be loaded on non-AMD platforms, but unloading it
      then fails with the following Oops:
      
       BUG: unable to handle kernel NULL pointer dereference at           (null)
       IP: __list_del_entry_valid+0x29/0x90
       Call Trace:
        perf_pmu_unregister+0x25/0xf0
        amd_power_pmu_exit+0x1c/0xd23 [power]
        SyS_delete_module+0x1a8/0x2b0
        ? exit_to_usermode_loop+0x8f/0xb0
        entry_SYSCALL_64_fastpath+0x20/0x83
      
      Return -ENODEV instead of 0 from the module init function if the CPU does
      not match (see the sketch after this entry).
      
      Fixes: c7ab62bf ("perf/x86/amd/power: Add AMD accumulated power reporting mechanism")
      Signed-off-by: Xiao Liang <xiliang@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180122061252.6394-1-xiliang@redhat.com
      40d4071c
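      A simplified sketch of the corrected init path (identifiers follow the
      driver only approximately):

        static int __init amd_power_pmu_init(void)
        {
                /*
                 * Refuse to load on CPUs that do not match.  Returning
                 * -ENODEV instead of 0 means the module never loads, so it
                 * never has to unregister a PMU it did not set up.
                 */
                if (!x86_match_cpu(cpu_match))
                        return -ENODEV;

                /* ... PMU registration elided ... */
                return 0;
        }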
    • x86/retpoline: Remove the esp/rsp thunk · 1df37383
      Authored by Waiman Long
      It doesn't make sense to have an indirect call thunk for esp/rsp, as
      retpoline code cannot work correctly with the stack pointer register.
      Removing it helps compiler writers catch errors in case such a thunk
      call is ever emitted incorrectly.
      
      Fixes: 76b04384 ("x86/retpoline: Add initial retpoline support")
      Suggested-by: Jeff Law <law@redhat.com>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1516658974-27852-1-git-send-email-longman@redhat.com
      1df37383
    • ftrace, orc, x86: Handle ftrace dynamically allocated trampolines · 6be7fa3c
      Authored by Steven Rostedt (VMware)
      The function tracer can create a dynamically allocated trampoline that is
      called by the mcount or fentry hook in order to invoke the registered
      function callback. The problem is that the ORC unwinder bails when it
      encounters one of these trampolines. This breaks the stack trace of
      function callbacks, which include the stack tracer and setting the
      stack trace for individual functions.
      
      Since these dynamic trampolines are basically copies of the static ftrace
      trampolines defined in ftrace_*.S, we do not need to create new ORC
      entries for the dynamic trampolines. Finding the return address on the
      stack works identically to the functions that were copied to create the
      dynamic trampolines. When encountering an ftrace dynamic trampoline, we
      can simply use the ORC entry of the static ftrace function that was
      copied for that trampoline (see the sketch after this entry).
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      6be7fa3c
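      A sketch of the idea (simplified; the helper names approximate the real
      lookup): when the ORC unwinder is asked about an address inside a
      dynamically allocated trampoline, resolve it to the static trampoline it
      was copied from and use that entry instead.

        static struct orc_entry *orc_ftrace_find(unsigned long ip)
        {
                struct ftrace_ops *ops;
                unsigned long tramp_addr;

                /* Does ip fall inside a dynamically allocated trampoline? */
                ops = ftrace_ops_trampoline(ip);
                if (!ops)
                        return NULL;

                /* The dynamic trampoline is a copy of one of the static
                 * ones, so reuse the ORC entry of the static call site. */
                if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
                        tramp_addr = (unsigned long)ftrace_regs_call;
                else
                        tramp_addr = (unsigned long)ftrace_call;

                return orc_find(tramp_addr);
        }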
    • x86/ftrace: Fix ORC unwinding from ftrace handlers · e2ac83d7
      Authored by Josh Poimboeuf
      Steven Rostedt discovered that the ftrace stack tracer is broken when
      it's used with the ORC unwinder.  The problem is that objtool is
      instructed by the Makefile to ignore the ftrace_64.S code, so it doesn't
      generate any ORC data for it.
      
      Fix it by making the asm code objtool-friendly:
      
      - Objtool doesn't like the fact that save_mcount_regs pushes RBP at the
        beginning, but it's never restored (directly, at least).  So just skip
        the original RBP push, which is only needed for frame pointers anyway.
      
      - Annotate some functions as normal callable functions with
        ENTRY/ENDPROC.
      
      - Add an empty unwind hint to return_to_handler().  The return address
        isn't on the stack, so there's nothing ORC can do there.  It will just
        punt in the unlikely case it tries to unwind from that code.
      
      With all that fixed, remove the OBJECT_FILES_NON_STANDARD Makefile
      annotation so objtool can read the file.
      
      Link: http://lkml.kernel.org/r/20180123040746.ih4ep3tk4pbjvg7c@treble
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      e2ac83d7
  7. 21 January 2018, 1 commit
  8. 19 January 2018, 5 commits
  9. 18 January 2018, 1 commit
    • x86/mm: Rework wbinvd, hlt operation in stop_this_cpu() · f23d74f6
      Authored by Tom Lendacky
      Some issues have been reported with the for loop in stop_this_cpu() that
      issues the 'wbinvd; hlt' sequence.  Reverting this sequence to halt()
      has been shown to resolve the issue.
      
      However, the wbinvd is needed when running with SME.  The reason for the
      wbinvd is to prevent cache flush races between encrypted and non-encrypted
      entries that have the same physical address.  This can occur when
      kexec'ing from memory encryption active to inactive or vice versa.  The
      important thing is to avoid memory references outside of kernel text
      (such as stack usage), so the native_*() functions are needed since they
      expand to inline asm sequences.  So instead of reverting the change,
      rework the sequence.
      
      Move the wbinvd instruction outside of the for loop as native_wbinvd()
      and make its execution conditional on X86_FEATURE_SME.  In the for loop,
      change the asm 'wbinvd; hlt' sequence back to a halt sequence, but use
      the native_halt() call (see the sketch after this entry).
      
      Fixes: bba4ed01 ("x86/mm, kexec: Allow kexec to be used with SME")
      Reported-by: Dave Young <dyoung@redhat.com>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Dave Young <dyoung@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Yu Chen <yu.c.chen@intel.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: kexec@lists.infradead.org
      Cc: ebiederm@redhat.com
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Rui Zhang <rui.zhang@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180117234141.21184.44067.stgit@tlendack-t1.amdoffice.net
      f23d74f6
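      A simplified sketch of the reworked sequence as described above:

        void stop_this_cpu(void *dummy)
        {
                /* ... local APIC shutdown etc. elided ... */

                /*
                 * Flush caches once, and only when memory encryption is
                 * active; native_wbinvd() expands to inline asm, so no
                 * stack or other out-of-text memory is referenced.
                 */
                if (boot_cpu_has(X86_FEATURE_SME))
                        native_wbinvd();

                /* Halt in a loop; again an inline-asm expansion rather than
                 * a call through a paravirt pointer. */
                for (;;)
                        native_halt();
        }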
  10. 17 January 2018, 2 commits