1. 07 Feb 2018, 2 commits
  2. 03 Feb 2018, 2 commits
    • x86/dumpstack: Avoid uninitlized variable · ebfc1501
      Arnd Bergmann authored
      In some configurations, 'partial' does not get initialized, as shown by
      this gcc-8 warning:
      
      arch/x86/kernel/dumpstack.c: In function 'show_trace_log_lvl':
      arch/x86/kernel/dumpstack.c:156:4: error: 'partial' may be used uninitialized in this function [-Werror=maybe-uninitialized]
          show_regs_if_on_stack(&stack_info, regs, partial);
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      This initializes it to false, to get the previous behavior in this case.
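
      A minimal sketch of the shape of the fix, using the names from the
      warning above (illustrative, not the verbatim patch):

          bool partial = false;   /* was 'bool partial;' with no initializer */

          /* ... the stack walk may or may not assign 'partial' ... */
          show_regs_if_on_stack(&stack_info, regs, partial);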
      
      Fixes: a9cdbe72 ("x86/dumpstack: Fix partial register dumps")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Borislav Petkov <bpetkov@suse.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Link: https://lkml.kernel.org/r/20180202145634.200291-1-arnd@arndb.de
    • x86/pti: Mark constant arrays as __initconst · 4bf5d56d
      Arnd Bergmann authored
      I'm seeing build failures from the two newly introduced arrays that
      are marked 'const' and '__initdata', which are mutually exclusive:
      
      arch/x86/kernel/cpu/common.c:882:43: error: 'cpu_no_speculation' causes a section type conflict with 'e820_table_firmware_init'
      arch/x86/kernel/cpu/common.c:895:43: error: 'cpu_no_meltdown' causes a section type conflict with 'e820_table_firmware_init'
      
      The correct annotation is __initconst.
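
      For illustration, the corrected annotation on one of the arrays, assuming
      the usual x86_cpu_id match-table type and with the entries elided (a
      sketch, not the actual patch hunk):

          /* 'const' objects cannot live in the writable .init.data section
           * that __initdata selects; __initconst places them in .init.rodata. */
          static const struct x86_cpu_id cpu_no_speculation[] __initconst = {
                  /* ... match entries elided ... */
                  {}
          };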
      
      Fixes: fec9434a ("x86/pti: Do not enable PTI on CPUs which are not vulnerable to Meltdown")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180202213959.611210-1-arnd@arndb.de
  3. 02 Feb 2018, 1 commit
  4. 31 Jan 2018, 8 commits
    • x86/paravirt: Remove 'noreplace-paravirt' cmdline option · 12c69f1e
      Josh Poimboeuf authored
      The 'noreplace-paravirt' option disables paravirt patching, leaving the
      original pv indirect calls in place.
      
      That's highly incompatible with retpolines, unless we want to uglify
      paravirt even further and convert the paravirt calls to retpolines.
      
      As far as I can tell, the option doesn't seem to be useful for much
      other than introducing surprising corner cases and making the kernel
      vulnerable to Spectre v2.  It was probably a debug option from the early
      paravirt days.  So just remove it.
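
      For reference, such an option is typically wired up via __setup(); the
      removed code would have looked roughly like this sketch (reconstructed,
      not the verbatim deletion):

          static int __init setup_noreplace_paravirt(char *str)
          {
                  noreplace_paravirt = 1;
                  return 1;
          }
          __setup("noreplace-paravirt", setup_noreplace_paravirt);
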
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Jun Nakajima <jun.nakajima@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Asit Mallick <asit.k.mallick@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Arjan Van De Ven <arjan.van.de.ven@intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Link: https://lkml.kernel.org/r/20180131041333.2x6blhxirc2kclrq@treble
    • x86/kexec: Make kexec (mostly) work in 5-level paging mode · 5bf30316
      Kirill A. Shutemov authored
      Currently kexec() will crash when switching into a 5-level paging
      enabled kernel.
      
      I missed that we need to change relocate_kernel() to set CR4.LA57
      flag if the kernel has 5-level paging enabled.
      
      I avoided using #ifdef CONFIG_X86_5LEVEL here and instead infer whether
      5-level paging needs to be enabled from the previous CR4 value. This way
      the code is ready for boot-time switching between paging modes.
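
      The idea, expressed as C for clarity (the real change is in the
      relocate_kernel() assembly, so treat this purely as a sketch):

          /* Carry LA57 over from the currently running kernel instead of
           * deciding at compile time with #ifdef CONFIG_X86_5LEVEL. */
          unsigned long cr4 = native_read_cr4();
          unsigned long trampoline_cr4 = X86_CR4_PAE;

          if (cr4 & X86_CR4_LA57)
                  trampoline_cr4 |= X86_CR4_LA57; /* keep 5-level paging enabled */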
      
      With this patch applied, in addition to kexec 4-to-4 which always worked,
      we can kexec 4-to-5 and 5-to-5 - while 5-to-4 will need more work.
      Reported-by: Baoquan He <bhe@redhat.com>
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Baoquan He <bhe@redhat.com>
      Cc: <stable@vger.kernel.org> # v4.14+
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Fixes: 77ef56e4 ("x86: Enable 5-level paging support via CONFIG_X86_5LEVEL=y")
      Link: http://lkml.kernel.org/r/20180129110845.26633-1-kirill.shutemov@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/irq: Count Hyper-V reenlightenment interrupts · 51d4e5da
      Vitaly Kuznetsov authored
      Hyper-V reenlightenment interrupts arrive when the VM is migrated. While
      they are not interesting in general, they matter when L2 nested guests
      are running.
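
      The counting itself follows the usual x86 vector-handler pattern; a
      one-line sketch (the stat field name is an assumption):

          inc_irq_stat(irq_hv_reenlightenment_count); /* surfaces in /proc/interrupts */
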
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: devel@linuxdriverproject.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Cathy Avery <cavery@redhat.com>
      Cc: Mohammed Gamal <mmorsy@redhat.com>
      Link: https://lkml.kernel.org/r/20180124132337.30138-6-vkuznets@redhat.com
    • x86/hyperv: Reenlightenment notifications support · 93286261
      Vitaly Kuznetsov authored
      Hyper-V supports Live Migration notification. This is supposed to be used
      in conjunction with TSC emulation: when a VM is migrated to a host with a
      different TSC frequency, the host emulates TSC accesses for a short period
      and sends an interrupt to notify the guest of the event. When the guest is
      done updating everything, it can disable TSC emulation and everything will
      start working fast again.
      
      These notifications weren't required until now as Hyper-V guests are not
      supposed to use TSC as a clocksource: in Linux the TSC is even marked as
      unstable on boot. Guests normally use the 'tsc page' clocksource and the
      host updates its values automatically on migration.
      
      Things change with nested virtualization: even when the PV
      clocksources (kvm-clock or tsc page) are passed through to the nested
      guests, the TSC frequency and frequency changes need to be known.
      
      Hyper-V Top Level Functional Specification (as of v5.0b) wrongly specifies
      EAX:BIT(12) of CPUID:0x40000009 as the feature identification bit. The
      right one to check is EAX:BIT(13) of CPUID:0x40000003. I was assured that
      the fix is on the way.
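
      A sketch of the corrected check (the function name is illustrative;
      ms_hyperv.features caches EAX of CPUID 0x40000003):

          static void __init hv_check_reenlightenment(void)
          {
                  /* EAX bit 13 of CPUID 0x40000003, as described above */
                  if (!(ms_hyperv.features & BIT(13)))
                          return;
                  /* ... set up the reenlightenment vector and MSRs ... */
          }
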
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: devel@linuxdriverproject.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Cathy Avery <cavery@redhat.com>
      Cc: Mohammed Gamal <mmorsy@redhat.com>
      Link: https://lkml.kernel.org/r/20180124132337.30138-4-vkuznets@redhat.com
    • x86/cpuid: Fix up "virtual" IBRS/IBPB/STIBP feature bits on Intel · 7fcae111
      David Woodhouse authored
      Despite the fact that all the other code there seems to be doing it, just
      using set_cpu_cap() in early_intel_init() doesn't actually work.
      
      For CPUs with PKU support, setup_pku() calls get_cpu_cap() after
      c->c_init() has set those feature bits. That resets those bits back to what
      was queried from the hardware.
      
      Turning the bits off for bad microcode is easy to fix. That can just use
      setup_clear_cpu_cap() to force them off for all CPUs.
      
      I was less keen on forcing the feature bits *on* that way, just in case
      of inconsistencies. I appreciate that the kernel is going to get this
      utterly wrong if CPU features are not consistent, because it has already
      applied alternatives by the time secondary CPUs are brought up.
      
      But at least if setup_force_cpu_cap() isn't being used, we might have a
      chance of *detecting* the lack of the corresponding bit and either
      panicking or refusing to bring the offending CPU online.
      
      So ensure that the appropriate feature bits are set within get_cpu_cap()
      regardless of how many extra times it's called.
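
      A sketch of the distinction being described (the feature flag below is a
      placeholder):

          set_cpu_cap(c, X86_FEATURE_IBPB);      /* can be wiped by a later get_cpu_cap() */
          setup_clear_cpu_cap(X86_FEATURE_IBPB); /* sticks: bad microcode, force the bit off */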
      
      Fixes: 2961298e ("x86/cpufeatures: Clean up Spectre v2 related CPUID flags")
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: karahmed@amazon.de
      Cc: peterz@infradead.org
      Cc: bp@alien8.de
      Link: https://lkml.kernel.org/r/1517322623-15261-1-git-send-email-dwmw@amazon.co.uk
    • x86/spectre: Fix spelling mistake: "vunerable"-> "vulnerable" · e698dcdf
      Colin Ian King authored
      Trivial fix to spelling mistake in pr_err error message text.
      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: kernel-janitors@vger.kernel.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/20180130193218.9271-1-colin.king@canonical.com
    • x86/spectre: Report get_user mitigation for spectre_v1 · edfbae53
      Dan Williams authored
      Reflect the presence of get_user(), __get_user(), and 'syscall' protections
      in sysfs. The expectation is that new and better tooling will allow the
      kernel to grow more usages of array_index_nospec(); for now, only claim
      mitigation for __user pointer dereferences.
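
      For context, the array_index_nospec() pattern from include/linux/nospec.h
      that these mitigations build on ('idx', 'table', and 'val' are
      hypothetical):

          if (idx >= ARRAY_SIZE(table))
                  return -EINVAL;
          idx = array_index_nospec(idx, ARRAY_SIZE(table)); /* clamped even under speculation */
          val = table[idx];
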
      Reported-by: Jiri Slaby <jslaby@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: kernel-hardening@lists.openwall.com
      Cc: gregkh@linuxfoundation.org
      Cc: torvalds@linux-foundation.org
      Cc: alan@linux.intel.com
      Link: https://lkml.kernel.org/r/151727420158.33451.11658324346540434635.stgit@dwillia2-desk3.amr.corp.intel.com
    • x86: remove arch specific early_init_dt_alloc_memory_arch · b75e250a
      Rob Herring authored
      Now that the DT core code handles bootmem arches, we can remove the x86
      specific early_init_dt_alloc_memory_arch function.
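
      The removed x86 override was essentially a thin bootmem wrapper along
      these lines (reconstructed from memory, so treat it as an approximation):

          void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
          {
                  return __alloc_bootmem(size, align, __pa(MAX_DMA_ADDRESS));
          }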
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Signed-off-by: Rob Herring <robh@kernel.org>
  5. 30 Jan 2018, 2 commits
  6. 28 Jan 2018, 3 commits
  7. 27 Jan 2018, 1 commit
  8. 26 Jan 2018, 7 commits
  9. 24 Jan 2018, 6 commits
    • x86/centaur: Mark TSC invariant · fe6daab1
      davidwang authored
      The Centaur CPU has a constant-frequency TSC, and that TSC does not stop
      in C-states. But because the corresponding TSC feature flags are not set
      for that CPU, the TSC is treated as not having a constant frequency and
      assumed to stop in C-states, which makes it an unreliable and unusable
      clock source.

      Setting those flags tells the kernel that the TSC is usable, so it will
      select it over the HPET. The effect is that reading time stamps (from
      kernel or user space) will be faster and more efficient.
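
      The flags in question, as they are typically set from a vendor init hook
      (a sketch of the pattern, not the exact hunk):

          /* The counter ticks at a constant rate and keeps running in
           * C-states, so the TSC can be preferred over the HPET. */
          set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
          set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
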
      Signed-off-by: davidwang <davidwang@zhaoxin.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: qiyuanwang@zhaoxin.com
      Cc: linux-pm@vger.kernel.org
      Cc: brucechang@via-alliance.com
      Cc: cooperyan@zhaoxin.com
      Cc: benjaminpan@viatech.com
      Link: https://lkml.kernel.org/r/1516616057-5158-1-git-send-email-davidwang@zhaoxin.com
    • x86/microcode: Fix again accessing initrd after having been freed · 1d080f09
      Borislav Petkov authored
      Commit 24c25032 ("x86/microcode: Do not access the initrd after it has
      been freed") fixed attempts to access initrd from the microcode loader
      after it has been freed. However, a similar KASAN warning was reported
      (stack trace edited):
      
        smpboot: Booting Node 0 Processor 1 APIC 0x11
        ==================================================================
        BUG: KASAN: use-after-free in find_cpio_data+0x9b5/0xa50
        Read of size 1 at addr ffff880035ffd000 by task swapper/1/0
      
        CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.14.8-slack #7
        Hardware name: System manufacturer System Product Name/A88X-PLUS, BIOS 3003 03/10/2016
        Call Trace:
         dump_stack
         print_address_description
         kasan_report
         ? find_cpio_data
         __asan_report_load1_noabort
         find_cpio_data
         find_microcode_in_initrd
         __load_ucode_amd
         load_ucode_amd_ap
            load_ucode_ap
      
      After some investigation, it turned out that a merge was done using the
      wrong side to resolve, leading to picking up the previous state, before
      the 24c25032 fix. Therefore the Fixes tag below contains a merge
      commit.
      
      Revert the mismerge by catching the save_microcode_in_initrd_amd()
      retval and thus letting the function exit with the last return statement
      so that initrd_gone can be set to true.
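
      The restored control flow, roughly (a simplified sketch, not the verbatim
      diff):

          ret = save_microcode_in_initrd_amd(cpuid_eax(1)); /* was: return ...; */
          /* ... */
          initrd_gone = true;     /* the early return used to skip this */
          return ret;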
      
      Fixes: f26483ea ("Merge branch 'x86/urgent' into x86/microcode, to resolve conflicts")
      Reported-by: <higuita@gmx.net>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=198295
      Link: https://lkml.kernel.org/r/20180123104133.918-2-bp@alien8.de
    • x86/microcode/intel: Extend BDW late-loading further with LLC size check · 7e702d17
      Jia Zhang authored
      Commit b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a
      revision check") reduced the impact of erratum BDF90 for Broadwell model
      79.
      
      The impact can be reduced further by checking the size of the last level
      cache portion per core.
      
      Tony: "The erratum says the problem only occurs on the large-cache SKUs.
      So we only need to avoid the update if we are on a big cache SKU that is
      also running old microcode."
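
      A sketch of the per-core LLC computation such a check needs; the size
      threshold and microcode revision below are taken to be the erratum
      values and should be treated as assumptions here:

          /* x86_cache_size is in KB; derive each core's share of the LLC */
          u64 llc_per_core = (u64)c->x86_cache_size * 1024;

          do_div(llc_per_core, c->x86_max_cores);
          if (llc_per_core > 2621440 && c->microcode < 0x0b000021) /* > 2.5 MiB */
                  return true;    /* large-cache SKU on old microcode: skip late load */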
      
      For more details, see erratum BDF90 in document #334165 (Intel Xeon
      Processor E7-8800/4800 v4 Product Family Specification Update) from
      September 2017.
      
      Fixes: b94b7373 ("x86/microcode/intel: Extend BDW late-loading with a revision check")
      Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1516321542-31161-1-git-send-email-zhang.jia@linux.alibaba.com
    • ftrace, orc, x86: Handle ftrace dynamically allocated trampolines · 6be7fa3c
      Steven Rostedt (VMware) authored
      The function tracer can create a dynamically allocated trampoline that is
      called by the mcount or fentry hook in order to invoke the registered
      function callback. The problem is that the ORC unwinder will bail if it
      encounters one of these trampolines. This breaks the stack trace of
      function callbacks, which includes the stack tracer and setting the stack
      trace for individual functions.
      
      Since these dynamic trampolines are basically copies of the static ftrace
      trampolines defined in ftrace_*.S, we do not need to create new ORC
      entries for the dynamic trampolines. Finding the return address on the
      stack works exactly as it does for the functions that were copied to
      create the dynamic trampolines. When encountering an ftrace dynamic
      trampoline, we can just use the ORC entry of the static ftrace function
      that was copied for that trampoline.
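
      Conceptually, the lookup becomes something like this (orc_find() is the
      existing ORC lookup; the two helpers are hypothetical names for the idea
      only):

          if (in_dynamic_ftrace_trampoline(ip))           /* hypothetical */
                  ip = matching_static_trampoline_ip(ip); /* hypothetical */
          orc = orc_find(ip);
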
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • x86/ftrace: Fix ORC unwinding from ftrace handlers · e2ac83d7
      Josh Poimboeuf authored
      Steven Rostedt discovered that the ftrace stack tracer is broken when
      it's used with the ORC unwinder.  The problem is that objtool is
      instructed by the Makefile to ignore the ftrace_64.S code, so it doesn't
      generate any ORC data for it.
      
      Fix it by making the asm code objtool-friendly:
      
      - Objtool doesn't like the fact that save_mcount_regs pushes RBP at the
        beginning, but it's never restored (directly, at least).  So just skip
        the original RBP push, which is only needed for frame pointers anyway.
      
      - Annotate some functions as normal callable functions with
        ENTRY/ENDPROC.
      
      - Add an empty unwind hint to return_to_handler().  The return address
        isn't on the stack, so there's nothing ORC can do there.  It will just
        punt in the unlikely case it tries to unwind from that code.
      
      With all that fixed, remove the OBJECT_FILES_NON_STANDARD Makefile
      annotation so objtool can read the file.
      
      Link: http://lkml.kernel.org/r/20180123040746.ih4ep3tk4pbjvg7c@treble
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • mm/memory_failure: Remove unused trapno from memory_failure · 83b57531
      Eric W. Biederman authored
      Today 4 architectures set ARCH_SUPPORTS_MEMORY_FAILURE (arm64, parisc,
      powerpc, and x86), while 4 other architectures set __ARCH_SI_TRAPNO
      (alpha, metag, sparc, and tile).  These two sets of architectures do
      not intersect, so remove the trapno parameter to avoid confusion.
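
      The resulting interface change, as described (prototype sketch):

          /* before: int memory_failure(unsigned long pfn, int trapno, int flags); */
          int memory_failure(unsigned long pfn, int flags);
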
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  10. 20 Jan 2018, 2 commits
  11. 19 Jan 2018, 3 commits
  12. 18 Jan 2018, 3 commits