1. 19 January 2018 (5 commits)
  2. 17 January 2018 (1 commit)
• x86/cpufeature: Move processor tracing out of scattered features · 4fdec203
  Committed by Paolo Bonzini
      Processor tracing is already enumerated in word 9 (CPUID[7,0].EBX),
      so do not duplicate it in the scattered features word.
      
      Besides being more tidy, this will be useful for KVM when it presents
      processor tracing to the guests.  KVM selects host features that are
      supported by both the host kernel (depending on command line options,
      CPU errata, or whatever) and KVM.  Whenever a full feature word exists,
      KVM's code is written in the expectation that the CPUID bit number
      matches the X86_FEATURE_* bit number, but this is not the case for
      X86_FEATURE_INTEL_PT.
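As an illustrative sketch of the invariant this restores (the 32*word+bit
numbering convention is the kernel's; the CPUID bit position is per the
Intel SDM, and the helper below is hypothetical):

 /* cpufeatures.h convention: feature number = 32 * word + bit, so for a
  * full CPUID word the kernel bit number equals the hardware bit number. */
 #define X86_FEATURE_INTEL_PT   ( 9*32 + 25)   /* CPUID.(7,0):EBX bit 25 */

 static inline bool cpu_has_intel_pt(void)     /* hypothetical helper */
 {
         unsigned int eax, ebx, ecx, edx;

         cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
         return ebx & (1u << 25);       /* same bit number KVM can reuse */
 }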
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luwei Kang <luwei.kang@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
Link: http://lkml.kernel.org/r/1516117345-34561-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
3. 15 January 2018 (4 commits)
• x86/retpoline: Add LFENCE to the retpoline/RSB filling RSB macros · 28d437d5
  Committed by Tom Lendacky
The PAUSE instruction is currently used in the retpoline and RSB filling
macros as a speculation trap.  PAUSE was originally suggested because it
showed a very small difference in the cycles/time needed to execute the
retpoline compared to LFENCE.  On AMD, however, PAUSE is not a serializing
instruction, so the pause/jmp loop wastes power while it is speculatively
executed, waiting for the return to mispredict to the correct target.
      
The RSB filling macro is applicable to AMD, and if software cannot verify
that LFENCE is serializing on AMD (possible when running under a
hypervisor), the generic retpoline support will be used, so that is also
applicable to AMD.  Keep the current usage of PAUSE for Intel, but add an
LFENCE instruction to the speculation trap for AMD.
      
      The same sequence has been adopted by GCC for the GCC generated retpolines.
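For reference, a simplified sketch of the amended thunk (the generic
retpoline sequence with the added trap; not the exact in-tree macro):

 __x86_indirect_thunk_rax:               /* replaces an indirect 'jmp *%rax' */
         call    1f                      /* push a safe return target */
 2:      pause                           /* speculation trap: CPU parks here */
         lfence                          /* AMD: stop speculating the loop */
         jmp     2b
 1:      mov     %rax, (%rsp)            /* replace return address with target */
         ret                             /* 'ret' predicts to 2:, capturing it */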
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@alien8.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Kees Cook <keescook@google.com>
      Link: https://lkml.kernel.org/r/20180113232730.31060.36287.stgit@tlendack-t1.amdoffice.net
• x86/retpoline: Fill RSB on context switch for affected CPUs · c995efd5
  Committed by David Woodhouse
On context switch from a shallow call stack to a deeper one, as the CPU
executes 'ret' up the deeper side it may encounter RSB entries (predictions
for where the 'ret' goes) which were populated in userspace.
      
      This is problematic if neither SMEP nor KPTI (the latter of which marks
      userspace pages as NX for the kernel) are active, as malicious code in
      userspace may then be executed speculatively.
      
      Overwrite the CPU's return prediction stack with calls which are predicted
      to return to an infinite loop, to "capture" speculation if this
      happens. This is required both for retpoline, and also in conjunction with
      IBRS for !SMEP && !KPTI.
      
On Skylake+ the problem is slightly different, and an *underflow* of the
RSB may cause errant branch predictions to occur. So there it's not so much
overwriting as *filling* the RSB, to attempt to prevent it from getting
empty. This is only a partial solution for Skylake+ since there are many
other conditions which may result in the RSB becoming empty. The full
solution on Skylake+ is to use IBRS, which prevents the problem even when
the RSB becomes empty. With IBRS, the RSB-stuffing is not required on
context switch.
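A condensed sketch of the RSB-stuffing idea (the in-tree
__FILL_RETURN_BUFFER macro is parameterized and partially unrolled; this
keeps only the shape):

         mov     $16, %ecx       /* 16 iterations x 2 calls = 32 RSB entries */
 1:      call    2f              /* each call pushes a benign trap address */
 11:     pause                   /* speculation trap for the first call */
         lfence
         jmp     11b
 2:      call    3f
 12:     pause                   /* speculation trap for the second call */
         lfence
         jmp     12b
 3:      dec     %ecx
         jnz     1b
         add     $(32*8), %rsp   /* drop the 32 bogus return addresses */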
      
[ tglx: Added missing vendor check and slightly massaged comments and
  	changelog ]
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515779365-9032-1-git-send-email-dwmw@amazon.co.uk
• x86/kasan: Panic if there is not enough memory to boot · 0d39e266
  Committed by Andrey Ryabinin
  Currently KASAN doesn't panic when it doesn't have enough memory to
  boot. Instead, it crashes in some random place:
      
       kernel BUG at arch/x86/mm/physaddr.c:27!
      
       RIP: 0010:__phys_addr+0x268/0x276
       Call Trace:
        kasan_populate_shadow+0x3f2/0x497
        kasan_init+0x12e/0x2b2
        setup_arch+0x2825/0x2a2c
        start_kernel+0xc8/0x15f4
        x86_64_start_reservations+0x2a/0x2c
        x86_64_start_kernel+0x72/0x75
        secondary_startup_64+0xa5/0xb0
      
Use memblock_virt_alloc_try_nid() for the allocations; it has no failure
fallback and will panic with an out-of-memory message.
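A sketch of the resulting allocator, assuming the 4.15-era memblock API
(the plain variant panics internally instead of returning NULL):

 static __init void *early_alloc(size_t size, int nid)
 {
         /* No _nopanic suffix: failure panics with an OOM message instead
          * of returning NULL and crashing later in __phys_addr(). */
         return memblock_virt_alloc_try_nid(size, size,
                         __pa(MAX_DMA_ADDRESS),
                         BOOTMEM_ALLOC_ACCESSIBLE, nid);
 }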
Reported-by: kernel test robot <xiaolong.ye@intel.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Alexander Potapenko <glider@google.com>
      Cc: lkp@01.org
      Link: https://lkml.kernel.org/r/20180110153602.18919-1-aryabinin@virtuozzo.com
• x86/retpoline: Remove compile time warning · b8b9ce4b
  Committed by Thomas Gleixner
  Remove the compile time warning when CONFIG_RETPOLINE=y and the compiler
  does not have retpoline support. Linus' rationale for this is:
      
        It's wrong because it will just make people turn off RETPOLINE, and the
        asm updates - and return stack clearing - that are independent of the
        compiler are likely the most important parts because they are likely the
        ones easiest to target.
      
        And it's annoying because most people won't be able to do anything about
        it. The number of people building their own compiler? Very small. So if
        their distro hasn't got a compiler yet (and pretty much nobody does), the
        warning is just annoying crap.
      
        It is already properly reported as part of the sysfs interface. The
        compile-time warning only encourages bad things.
      
      Fixes: 76b04384 ("x86/retpoline: Add initial retpoline support")
Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: thomas.lendacky@amd.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Link: https://lkml.kernel.org/r/CA+55aFzWgquv4i6Mab6bASqYXg3ErV3XDFEYf=GEcCDQg5uAtw@mail.gmail.com
4. 14 January 2018 (2 commits)
• x86,perf: Disable intel_bts when PTI · 99a9dc98
  Committed by Peter Zijlstra
      The intel_bts driver does not use the 'normal' BTS buffer which is exposed
      through the cpu_entry_area but instead uses the memory allocated for the
      perf AUX buffer.
      
This obviously comes apart when using PTI, because the kernel mapping,
which includes that AUX buffer memory, then disappears. Fixing this
requires exposing a mapping which is visible in all contexts, and that's
not trivial.

As a quick fix, disable this driver when PTI is enabled to prevent
malfunction.
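The quick fix amounts to refusing initialization when PTI is on, roughly
like this (a sketch of the driver's init path, not the verbatim patch):

 static __init int bts_init(void)
 {
         if (!boot_cpu_has(X86_FEATURE_DTES64) || !x86_pmu.bts)
                 return -ENODEV;

         if (boot_cpu_has(X86_FEATURE_PTI)) {
                 /* BTS writes through a kernel mapping of the AUX buffer
                  * which is not visible once PTI unmaps it: bail out. */
                 return -ENODEV;
         }

         return perf_pmu_register(&bts_pmu, "intel_bts", -1);
 }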
      
      Fixes: 385ce0ea ("x86/mm/pti: Add Kconfig")
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Robert Święcki <robert@swiecki.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: greg@kroah.com
      Cc: hughd@google.com
      Cc: luto@amacapital.net
      Cc: Vince Weaver <vince@deater.net>
      Cc: torvalds@linux-foundation.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180114102713.GB6166@worktop.programming.kicks-ass.net
• x86/pti: Fix !PCID and sanitize defines · f10ee3dc
  Committed by Thomas Gleixner
The switch to the user space page tables in the low level ASM code
unconditionally sets bits 12 and 11 of CR3. Bit 12 switches the base
address of the page directory to the user part; bit 11 switches the PCID
to the PCID associated with the user page tables.
      
      This fails on a machine which lacks PCID support because bit 11 is set in
      CR3. Bit 11 is reserved when PCID is inactive.
      
      While the Intel SDM claims that the reserved bits are ignored when PCID is
      disabled, the AMD APM states that they should be cleared.
      
This went unnoticed because the AMD APM was not checked when the code was
developed and reviewed, and test systems with Intel CPUs never failed to
boot. The report is against a CentOS 6 host where the guest fails to boot,
so it's not yet clear whether this is a virt issue or can happen on real
hardware too, but that's irrelevant as the AMD APM clearly asks for
clearing the reserved bits.
      
Make sure that on non-PCID machines bit 11 is not set by the page table
switching code.

Andy suggested renaming the related bits and masks so they clearly
describe what they should be used for, which is done as well for clarity.

That split could have been done with alternatives, but the macro hell is
horrible and ugly. This can be done on top if someone cares to remove the
extra orq. For now it's a straightforward fix.
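In effect the user-CR3 construction becomes conditional on PCID, roughly
as below (a C sketch; the real code does this in asm, and build_user_cr3()
is a hypothetical helper):

 #define PTI_USER_PGTABLE_BIT    PAGE_SHIFT  /* bit 12: user pgdir half */
 #define PTI_USER_PCID_BIT       X86_CR3_PTI_PCID_USER_BIT  /* bit 11 */

 static inline unsigned long build_user_cr3(unsigned long cr3)
 {
         cr3 |= 1UL << PTI_USER_PGTABLE_BIT;      /* always: user tables */
         if (boot_cpu_has(X86_FEATURE_PCID))
                 cr3 |= 1UL << PTI_USER_PCID_BIT; /* only valid with PCID */
         /* Without PCID, bit 11 is reserved and must stay clear. */
         return cr3;
 }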
      
      Fixes: 6fd166aa ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
Reported-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable <stable@vger.kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801140009150.2371@nanos
5. 12 January 2018 (11 commits)
  6. 11 January 2018 (1 commit)
  7. 09 January 2018 (4 commits)
• x86/cpu/AMD: Use LFENCE_RDTSC in preference to MFENCE_RDTSC · 9c6a73c7
  Committed by Tom Lendacky
      With LFENCE now a serializing instruction, use LFENCE_RDTSC in preference
      to MFENCE_RDTSC.  However, since the kernel could be running under a
      hypervisor that does not support writing that MSR, read the MSR back and
      verify that the bit has been set successfully.  If the MSR can be read
      and the bit is set, then set the LFENCE_RDTSC feature, otherwise set the
      MFENCE_RDTSC feature.
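A sketch of the set-and-verify flow in the AMD init path (c points at the
cpuinfo_x86 being set up; a hypervisor may silently drop the WRMSR, hence
the read-back):

 u64 val;

 /* Try to make LFENCE serializing (DE_CFG bit 1). */
 msr_set_bit(MSR_F10H_DECFG, MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);

 /* Verify the bit actually stuck before relying on it. */
 if (!rdmsrl_safe(MSR_F10H_DECFG, &val) &&
     (val & MSR_F10H_DECFG_LFENCE_SERIALIZE))
         set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);  /* cheaper barrier */
 else
         set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);  /* safe fallback */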
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/20180108220932.12580.52458.stgit@tlendack-t1.amdoffice.net
• x86/cpu/AMD: Make LFENCE a serializing instruction · e4d0e84e
  Committed by Tom Lendacky
      To aid in speculation control, make LFENCE a serializing instruction
      since it has less overhead than MFENCE.  This is done by setting bit 1
      of MSR 0xc0011029 (DE_CFG).  Some families that support LFENCE do not
      have this MSR.  For these families, the LFENCE instruction is already
      serializing.
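The enabling itself is a single MSR bit, along these lines (MSR number per
the changelog; macro names as used in the kernel's AMD code):

 #define MSR_F10H_DECFG                        0xc0011029
 #define MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT   1

 /* On families that have DE_CFG, opt in to serializing LFENCE. */
 msr_set_bit(MSR_F10H_DECFG, MSR_F10H_DECFG_LFENCE_SERIALIZE_BIT);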
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/20180108220921.12580.71694.stgit@tlendack-t1.amdoffice.net
• x86/mm/pti: Remove dead logic in pti_user_pagetable_walk*() · 8d56eff2
  Committed by Jike Song
      The following code contains dead logic:
      
       162 if (pgd_none(*pgd)) {
       163         unsigned long new_p4d_page = __get_free_page(gfp);
       164         if (!new_p4d_page)
       165                 return NULL;
       166
       167         if (pgd_none(*pgd)) {
       168                 set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
       169                 new_p4d_page = 0;
       170         }
       171         if (new_p4d_page)
       172                 free_page(new_p4d_page);
       173 }
      
There can't be any difference between the two pgd_none(*pgd) checks at
L162 and L167, so the test at L171 is always false.
      
      Dave Hansen explained:
      
       Yes, the double-test was part of an optimization where we attempted to
       avoid using a global spinlock in the fork() path.  We would check for
       unallocated mid-level page tables without the lock.  The lock was only
       taken when we needed to *make* an entry to avoid collisions.
       
       Now that it is all single-threaded, there is no chance of a collision,
       no need for a lock, and no need for the re-check.
      
      As all these functions are only called during init, mark them __init as
      well.
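With the dead re-check gone, the walk reduces to something like this (a
sketch of pti_user_pagetable_walk_p4d(); sanity checks elided):

 static __init p4d_t *pti_user_pagetable_walk_p4d(unsigned long address)
 {
         pgd_t *pgd = kernel_to_user_pgdp(pgd_offset_k(address));
         gfp_t gfp = GFP_KERNEL | __GFP_NOTRACK | __GFP_ZERO;

         if (pgd_none(*pgd)) {
                 unsigned long new_p4d_page = __get_free_page(gfp);
                 if (!new_p4d_page)
                         return NULL;
                 /* Single-threaded init: no lock, no re-check needed. */
                 set_pgd(pgd, __pgd(_KERNPG_TABLE | __pa(new_p4d_page)));
         }

         return p4d_offset(pgd, address);
 }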
      
      Fixes: 03f4424f ("x86/mm/pti: Add functions to clone kernel PMDs")
Signed-off-by: Jike Song <albcamus@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Kees Cook <keescook@google.com>
Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg KH <gregkh@linux-foundation.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Paul Turner <pjt@google.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180108160341.3461-1-albcamus@gmail.com
• x86/tboot: Unbreak tboot with PTI enabled · 262b6b30
  Committed by Dave Hansen
This is another case similar to what EFI does: create a new set of page
tables, map some code at a low address, and jump to it.  PTI mistakes this
low address for userspace and marks it non-executable in an effort to make
it unusable for userspace.
      
      Undo the poison to allow execution.
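The undo is essentially one line on the freshly populated entry, roughly
(a sketch of the mapping path in tboot.c):

         set_pte_at(&tboot_mm, vaddr, pte, pfn_pte(pfn, prot));
         pte_unmap(pte);

         /*
          * PTI poisons low-address kernel mappings with _PAGE_NX to make
          * them unusable for userspace; tboot legitimately executes code
          * there, so clear the poison again.
          */
         pgd->pgd &= ~_PAGE_NX;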
      
      Fixes: 385ce0ea ("x86/mm/pti: Add Kconfig")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Jon Masters <jcm@redhat.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Jeff Law <law@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: David" <dwmw@amazon.co.uk>
      Cc: Nick Clifton <nickc@redhat.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180108102805.GK25546@redhat.com
8. 08 January 2018 (1 commit)
  9. 07 January 2018 (2 commits)
  10. 05 January 2018 (7 commits)
• x86/pti: Rename BUG_CPU_INSECURE to BUG_CPU_MELTDOWN · de791821
  Committed by Thomas Gleixner
      Use the name associated with the particular attack which needs page table
      isolation for mitigation.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Turner <pjt@google.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Greg KH <gregkh@linux-foundation.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801051525300.1724@nanos
• x86/alternatives: Add missing '\n' at end of ALTERNATIVE inline asm · b9e705ef
  Committed by David Woodhouse
      Where an ALTERNATIVE is used in the middle of an inline asm block, this
      would otherwise lead to the following instruction being appended directly
      to the trailing ".popsection", and a failed compile.
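A minimal illustration of the failure mode (the instructions are
arbitrary; the point is the string pasting in inline asm):

 /* ALTERNATIVE() expands to one long string ending in ".popsection".
  * Without a trailing "\n" there, the next string literal is glued onto
  * the same line and the assembler sees ".popsectionjmp 1f": build error. */
 asm volatile (ALTERNATIVE("", "lfence", X86_FEATURE_LFENCE_RDTSC)
               "jmp 1f\n"
               "1:\n");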
      
      Fixes: 9cebed42 ("x86, alternative: Use .pushsection/.popsection")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: ak@linux.intel.com
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Turner <pjt@google.com>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180104143710.8961-8-dwmw@amazon.co.uk
• x86/tlb: Drop the _GPL from the cpu_tlbstate export · 1e547681
  Committed by Thomas Gleixner
The recent changes for PTI touch cpu_tlbstate from various tlb_flush
inlines. cpu_tlbstate is exported as a GPL symbol, so this causes a
regression when building out-of-tree drivers for certain graphics cards.

Aside from that, the export was wrong since it was introduced, as it
should have been EXPORT_PER_CPU_SYMBOL_GPL().

Use the correct PER_CPU export and drop the _GPL to restore the previous
state, which allows users to utilize the cards they paid for.

As always I'm really thrilled to make this kind of change to support the
#friends (or however the hot hashtag of today is spelled) from that closet
sauce graphics corp.
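The change itself boils down to the export flavor (the old form shown
below is inferred from the changelog, not quoted from the tree):

 /* Old: wrong macro for a per-CPU variable, and GPL-only. */
 EXPORT_SYMBOL_GPL(cpu_tlbstate);

 /* New: the proper per-CPU export, without the _GPL restriction. */
 EXPORT_PER_CPU_SYMBOL(cpu_tlbstate);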
      
      Fixes: 1e02ce4c ("x86: Store a per-cpu shadow copy of CR4")
      Fixes: 6fd166aa ("x86/mm: Use/Fix PCID to optimize user/kernel switches")
Reported-by: Kees Cook <keescook@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: stable@vger.kernel.org
• x86/events/intel/ds: Use the proper cache flush method for mapping ds buffers · 42f3bdc5
  Committed by Peter Zijlstra
      Thomas reported the following warning:
      
       BUG: using smp_processor_id() in preemptible [00000000] code: ovsdb-server/4498
       caller is native_flush_tlb_single+0x57/0xc0
       native_flush_tlb_single+0x57/0xc0
       __set_pte_vaddr+0x2d/0x40
       set_pte_vaddr+0x2f/0x40
       cea_set_pte+0x30/0x40
       ds_update_cea.constprop.4+0x4d/0x70
       reserve_ds_buffers+0x159/0x410
       x86_reserve_hardware+0x150/0x160
       x86_pmu_event_init+0x3e/0x1f0
       perf_try_init_event+0x69/0x80
       perf_event_alloc+0x652/0x740
       SyS_perf_event_open+0x3f6/0xd60
       do_syscall_64+0x5c/0x190
      
      set_pte_vaddr is used to map the ds buffers into the cpu entry area, but
      there are two problems with that:
      
       1) The resulting flush is not supposed to be called in preemptible context
      
       2) The cpu entry area is supposed to be per CPU, but the debug store
          buffers are mapped for all CPUs so these mappings need to be flushed
          globally.
      
      Add the necessary preemption protection across the mapping code and flush
      TLBs globally.
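The shape of the fix (a sketch of ds_update_cea(); the structure follows
the changelog, not necessarily the exact patch):

 static void ds_update_cea(void *cea, void *addr, size_t size, pgprot_t prot)
 {
         unsigned long start = (unsigned long)cea;
         phys_addr_t pa = virt_to_phys(addr);
         size_t msz = 0;

         preempt_disable();      /* the flush must not migrate CPUs */
         for (; msz < size; msz += PAGE_SIZE, pa += PAGE_SIZE, cea += PAGE_SIZE)
                 cea_set_pte(cea, pa, prot);

         /* The buffers are mapped for all CPUs, so a local single-address
          * flush is insufficient: shoot down the range globally. */
         flush_tlb_kernel_range(start, start + size);
         preempt_enable();
 }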
      
      Fixes: c1961a46 ("x86/events/intel/ds: Map debug buffers in cpu_entry_area")
Reported-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Thomas Zeitlhofer <thomas.zeitlhofer+lkml@ze-it.at>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180104170712.GB3040@hirez.programming.kicks-ass.net
• x86/kaslr: Fix the vaddr_end mess · 1dddd251
  Committed by Thomas Gleixner
vaddr_end for KASLR is only documented in the KASLR code itself and is
adjusted depending on config options. So it's not surprising that a change
of the memory layout causes KASLR to have the wrong vaddr_end. This can
map arbitrary stuff into other areas, causing hard-to-understand problems.
      
      Remove the whole ifdef magic and define the start of the cpu_entry_area to
      be the end of the KASLR vaddr range.
      
      Add documentation to that effect.
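Conceptually, the fix pins the end of the randomization range to one
documented constant (a sketch; names per the 4.15 memory layout):

 /*
  * KASLR randomizes the direct mapping, vmalloc and vmemmap regions
  * within [vaddr_start, vaddr_end). The cpu_entry_area now begins exactly
  * where randomization must stop, on both 4- and 5-level paging.
  */
 static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;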
      
      Fixes: 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Benjamin Gilbert <benjamin.gilbert@coreos.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: stable <stable@vger.kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
• x86/mm: Map cpu_entry_area at the same place on 4/5 level · f2078904
  Committed by Thomas Gleixner
      There is no reason for 4 and 5 level pagetables to have a different
      layout. It just makes determining vaddr_end for KASLR harder than
      necessary.
      
      Fixes: 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Gilbert <benjamin.gilbert@coreos.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: stable <stable@vger.kernel.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801041320360.1771@nanos
• x86/mm: Set MODULES_END to 0xffffffffff000000 · f5a40711
  Committed by Andrey Ryabinin
Since f06bdd40 ("x86/mm: Adapt MODULES_END based on fixmap section size")
kasan_mem_to_shadow(MODULES_END) may not be aligned to a page boundary.

So passing a page-unaligned address to kasan_populate_zero_shadow() has
two possible effects:
      
1) It may leave a one-page hole in an area that is supposed to be
  populated. After commit 21506525 ("x86/kasan/64: Teach KASAN about the
  cpu_entry_area") that hole happens to be in the shadow covering the
  fixmap area and leads to a crash:
      
       BUG: unable to handle kernel paging request at fffffbffffe8ee04
       RIP: 0010:check_memory_region+0x5c/0x190
      
       Call Trace:
        <NMI>
        memcpy+0x1f/0x50
        ghes_copy_tofrom_phys+0xab/0x180
        ghes_read_estatus+0xfb/0x280
        ghes_notify_nmi+0x2b2/0x410
        nmi_handle+0x115/0x2c0
        default_do_nmi+0x57/0x110
        do_nmi+0xf8/0x150
        end_repeat_nmi+0x1a/0x1e
      
Note, the crash likely disappeared after commit 92a0f81d, which changed
the kasan_populate_zero_shadow() call back to the way it was before
commit 21506525.
      
2) An attempt to load a module near MODULES_END will fail, because
   __vmalloc_node_range(), called from kasan_module_alloc(), will hit the
   WARN_ON(!pte_none(*pte)) in vmap_pte_range() and bail out with an error.
      
To fix this, make kasan_mem_to_shadow(MODULES_END) page-aligned. Since the
KASAN shadow maps 8 bytes of address space per shadow byte, this means
MODULES_END must be 8*PAGE_SIZE aligned.
      
      The whole point of commit f06bdd40 was to move MODULES_END down if
      NR_CPUS is big, so the cpu_entry_area takes a lot of space.
      But since 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
      the cpu_entry_area is no longer in fixmap, so we could just set
      MODULES_END to a fixed 8*PAGE_SIZE aligned address.
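The resulting definition, per the patch title (the old, fixmap-derived
form is paraphrased from the changelog):

 /* Old: derived from the fixmap, possibly misaligned for KASAN. */
 #define MODULES_END    __fix_to_virt(__end_of_fixed_addresses + 1)

 /* New: fixed, 8*PAGE_SIZE-aligned end of the module area. */
 #define MODULES_END    _AC(0xffffffffff000000, UL)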
      
      Fixes: f06bdd40 ("x86/mm: Adapt MODULES_END based on fixmap section size")
Reported-by: Jakub Kicinski <kubakici@wp.pl>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Link: https://lkml.kernel.org/r/20171228160620.23818-1-aryabinin@virtuozzo.com
11. 04 January 2018 (2 commits)