  1. 27 December 2019: 5 commits
  2. 27 August 2018: 1 commit
    • x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+ · cc51e542
      Authored by Andi Kleen
      On Nehalem and newer core CPUs the CPU cache internally uses 44 bits
      physical address space. The L1TF workaround is limited by this internal
      cache address width, and needs to have one bit free there for the
      mitigation to work.
      
      Older client systems report only 36bit physical address space so the range
      check decides that L1TF is not mitigated for a 36bit phys/32GB system with
      some memory holes.
      
      But since these actually have the larger internal cache width this warning
      is bogus because it would only really be needed if the system had more than
      43bits of memory.
      
      Add a new internal x86_cache_bits field. Normally it is the same as the
      physical bits field reported by CPUID, but for Nehalem and newer force it to
      be at least 44bits.
      
      Change the L1TF memory size warning to use the new cache_bits field to
      avoid bogus warnings and remove the bogus comment about memory size.
      
      Fixes: 17dbca11 ("x86/speculation/l1tf: Add sysfs reporting for l1tf")
      Reported-by: George Anchev <studio@anchev.net>
      Reported-by: Christopher Snowhill <kode54@gmail.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Michael Hocko <mhocko@suse.com>
      Cc: vbabka@suse.cz
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180824170351.34874-1-andi@firstfloor.org
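      For illustration, a condensed sketch of the mechanism described in this
      commit. The helper and field names (override_cache_bits(), x86_cache_bits,
      INTEL_FAM6_*) follow the kernel of that era, but the model list here is
      abbreviated, so treat it as a sketch rather than the exact diff:

        /*
         * Sketch: x86_cache_bits starts out equal to the CPUID-reported
         * physical bits and is raised to at least 44 on Nehalem and newer
         * Intel family 6 parts, so the L1TF range check compares against
         * the real internal cache tag width.
         */
        static void override_cache_bits(struct cpuinfo_x86 *c)
        {
            if (c->x86 != 6)
                return;

            switch (c->x86_model) {
            case INTEL_FAM6_NEHALEM:
            case INTEL_FAM6_WESTMERE:
            case INTEL_FAM6_SANDYBRIDGE:
                /* newer models elided in this sketch */
                if (c->x86_cache_bits < 44)
                    c->x86_cache_bits = 44;
                break;
            }
        }

      The L1TF memory-size warning then derives its PFN limit from
      x86_cache_bits instead of x86_phys_bits, which is what silences the
      bogus warning on 36-bit-reporting client parts.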
  3. 07 August 2018: 1 commit
    • xen/pv: Call get_cpu_address_sizes to set x86_virt/phys_bits · 405c018a
      Authored by M. Vefa Bicakci
      Commit d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits
      adjustment corruption") has moved the query and calculation of the
      x86_virt_bits and x86_phys_bits fields of the cpuinfo_x86 struct
      from the get_cpu_cap function to a new function named
      get_cpu_address_sizes.
      
      One of the call sites related to Xen PV VMs was unfortunately missed
      in the aforementioned commit. This prevents successful boot-up of
      kernel versions 4.17 and up in Xen PV VMs if CONFIG_DEBUG_VIRTUAL
      is enabled, due to the following code path:
      
        enlighten_pv.c::xen_start_kernel
          mmu_pv.c::xen_reserve_special_pages
            page.h::__pa
              physaddr.c::__phys_addr
                physaddr.h::phys_addr_valid
      
      phys_addr_valid uses boot_cpu_data.x86_phys_bits to validate physical
      addresses. boot_cpu_data.x86_phys_bits is no longer populated before
      the call to xen_reserve_special_pages due to the aforementioned commit
      though, so the validation performed by phys_addr_valid fails, which
      causes __phys_addr to trigger a BUG, preventing boot-up.
      Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: x86@kernel.org
      Cc: stable@vger.kernel.org # for v4.17 and up
      Fixes: d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
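      A rough sketch of the shape of the fix, assuming the physaddr.h check and
      the xen_start_kernel() call site look approximately like this (simplified;
      get_cpu_address_sizes() is the helper introduced by commit d94a155c):

        /* physaddr.h, simplified: addresses above x86_phys_bits are rejected */
        static inline bool phys_addr_valid(unsigned long long addr)
        {
            return !(addr >> boot_cpu_data.x86_phys_bits);
        }

        /* Inside enlighten_pv.c::xen_start_kernel(), with the missing call
         * added: x86_phys_bits must be populated before the first __pa()
         * that runs with CONFIG_DEBUG_VIRTUAL enabled. */
        get_cpu_cap(&boot_cpu_data);
        get_cpu_address_sizes(&boot_cpu_data);  /* the one-line fix */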
  4. 03 August 2018: 1 commit
    • x86/speculation: Support Enhanced IBRS on future CPUs · 706d5168
      Authored by Sai Praneeth
      Future Intel processors will support "Enhanced IBRS", which is an "always
      on" mode, i.e. the IBRS bit in the SPEC_CTRL MSR is enabled once and never
      disabled.
      
      From the specification [1]:
      
       "With enhanced IBRS, the predicted targets of indirect branches
        executed cannot be controlled by software that was executed in a less
        privileged predictor mode or on another logical processor. As a
        result, software operating on a processor with enhanced IBRS need not
        use WRMSR to set IA32_SPEC_CTRL.IBRS after every transition to a more
        privileged predictor mode. Software can isolate predictor modes
        effectively simply by setting the bit once. Software need not disable
        enhanced IBRS prior to entering a sleep state such as MWAIT or HLT."
      
      If Enhanced IBRS is supported by the processor then use it as the
      preferred spectre v2 mitigation mechanism instead of Retpoline. Intel's
      Retpoline white paper [2] states:
      
       "Retpoline is known to be an effective branch target injection (Spectre
        variant 2) mitigation on Intel processors belonging to family 6
        (enumerated by the CPUID instruction) that do not have support for
        enhanced IBRS. On processors that support enhanced IBRS, it should be
        used for mitigation instead of retpoline."
      
      The reason why Enhanced IBRS is the recommended mitigation on processors
      which support it is that these processors also support CET which
      provides a defense against ROP attacks. Retpoline is very similar to ROP
      techniques and might trigger false positives in the CET defense.
      
      If Enhanced IBRS is selected as the mitigation technique for spectre v2,
      the IBRS bit in SPEC_CTRL MSR is set once at boot time and never
      cleared. The kernel also has to make sure that the IBRS bit remains set after
      VMEXIT because the guest might have cleared the bit. This is already
      covered by the existing x86_spec_ctrl_set_guest() and
      x86_spec_ctrl_restore_host() speculation control functions.
      
      Enhanced IBRS still requires IBPB for full mitigation.
      
      [1] Speculative-Execution-Side-Channel-Mitigations.pdf
      [2] Retpoline-A-Branch-Target-Injection-Mitigation.pdf
      Both documents are available at:
      https://bugzilla.kernel.org/show_bug.cgi?id=199511
      Originally-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim C Chen <tim.c.chen@intel.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Ravi Shankar <ravi.v.shankar@intel.com>
      Link: https://lkml.kernel.org/r/1533148945-24095-1-git-send-email-sai.praneeth.prakhya@intel.com
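      A minimal sketch of how this fits together, assuming the identifiers of
      that era (ARCH_CAP_IBRS_ALL, X86_FEATURE_IBRS_ENHANCED, SPEC_CTRL_IBRS);
      the real selection logic in bugs.c also handles command line options and
      the retpoline fallbacks:

        /* Detection: enhanced IBRS is enumerated via the IBRS_ALL bit in
         * IA32_ARCH_CAPABILITIES and turned into a synthetic feature bit. */
        u64 ia32_cap = 0;

        rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
        if (ia32_cap & ARCH_CAP_IBRS_ALL)
            setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);

        /* Selection: prefer enhanced IBRS over retpoline and set the IBRS
         * bit in SPEC_CTRL once at boot; it is never cleared afterwards. */
        enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;

        if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
            mode = SPECTRE_V2_IBRS_ENHANCED;
            x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
            wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
        }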
  5. 20 July 2018: 3 commits
    • x86/entry/32: Enter the kernel via trampoline stack · 45d7b255
      Authored by Joerg Roedel
      Use the entry-stack as a trampoline to enter the kernel. The entry-stack is
      already in the cpu_entry_area and will be mapped to userspace when PTI is
      enabled.
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Pavel Machek <pavel@ucw.cz>
      Cc: "H . Peter Anvin" <hpa@zytor.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Laight <David.Laight@aculab.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Eduardo Valentin <eduval@amazon.com>
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: aliguori@amazon.com
      Cc: daniel.gruss@iaik.tugraz.at
      Cc: hughd@google.com
      Cc: keescook@google.com
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Waiman Long <llong@redhat.com>
      Cc: "David H . Gutteridge" <dhgutteridge@sympatico.ca>
      Cc: joro@8bytes.org
      Link: https://lkml.kernel.org/r/1531906876-13451-8-git-send-email-joro@8bytes.org
    • x86/CPU: Call detect_nopl() only on the BSP · 9b3661cd
      Authored by Borislav Petkov
      Make detect_nopl() use the setup_* variants, have it be called only on the
      BSP, and drop the call in generic_identify(); X86_FEATURE_NOPL will be
      replicated to the APs through the forced caps. This helps to keep the mess
      at a manageable level.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: peterz@infradead.org
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-11-pasha.tatashin@oracle.com
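      The resulting detect_nopl() is tiny; a sketch of the post-patch shape,
      under the assumption that NOPL is forced on 64-bit, cleared on 32-bit,
      and that the setup_* variants propagate the decision to the APs via the
      forced/cleared capability masks:

        static void detect_nopl(void)
        {
        #ifdef CONFIG_X86_32
            setup_clear_cpu_cap(X86_FEATURE_NOPL);
        #else
            setup_force_cpu_cap(X86_FEATURE_NOPL);
        #endif
        }

      It is now called only from the BSP's early boot path; the APs inherit the
      result instead of re-detecting it in generic_identify().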
    • x86/jump_label: Initialize static branching early · 8990cac6
      Authored by Pavel Tatashin
      Static branching is useful for runtime patching of branches that sit in
      hot paths but change infrequently.

      The x86 clock framework is one example: it uses static branches to set up
      the best clock during boot and never changes it again.
      
      It is desirable to enable the TSC-based sched clock early to allow
      fine-grained boot-time analysis. That requires static branching to be
      functional early as well.
      
      Static branching requires patching nop instructions, thus,
      arch_init_ideal_nops() must be called prior to jump_label_init().
      
      Do all the necessary steps to call arch_init_ideal_nops() right after
      early_cpu_init(), which also allows inserting a call to jump_label_init()
      right after that. jump_label_init() will be called again from the generic
      init code, but the code is protected against reinitialization already.
      
      [ tglx: Massaged changelog ]
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: steven.sistare@oracle.com
      Cc: daniel.m.jordan@oracle.com
      Cc: linux@armlinux.org.uk
      Cc: schwidefsky@de.ibm.com
      Cc: heiko.carstens@de.ibm.com
      Cc: john.stultz@linaro.org
      Cc: sboyd@codeaurora.org
      Cc: hpa@zytor.com
      Cc: douly.fnst@cn.fujitsu.com
      Cc: prarit@redhat.com
      Cc: feng.tang@intel.com
      Cc: pmladek@suse.com
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: linux-s390@vger.kernel.org
      Cc: boris.ostrovsky@oracle.com
      Cc: jgross@suse.com
      Cc: pbonzini@redhat.com
      Link: https://lkml.kernel.org/r/20180719205545.16512-10-pasha.tatashin@oracle.com
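      The resulting ordering in setup_arch() then looks roughly like this
      (sketch; surrounding calls omitted):

        void __init setup_arch(char **cmdline_p)
        {
            /* ... */
            early_cpu_init();
            arch_init_ideal_nops();  /* pick the NOP encodings used for patching */
            jump_label_init();       /* static keys usable from here on */
            /* ... the later generic-init call to jump_label_init() is a no-op */
        }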
  6. 23 June 2018: 1 commit
  7. 21 June 2018: 3 commits
  8. 14 June 2018: 1 commit
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Authored by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it really
      is not about the compiler support (the compiler support now instead
      automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 06 June 2018: 2 commits
  10. 23 May 2018: 1 commit
  11. 19 May 2018: 1 commit
  12. 18 May 2018: 1 commit
  13. 17 May 2018: 4 commits
  14. 13 May 2018: 2 commits
  15. 10 May 2018: 1 commit
    • x86/bugs: Rename _RDS to _SSBD · 9f65fb29
      Authored by Konrad Rzeszutek Wilk
      Intel collateral will reference the SSB mitigation bit in IA32_SPEC_CTRL[2]
      as SSBD (Speculative Store Bypass Disable).
      
      Hence changing it.
      
      It is unclear yet what the MSR_IA32_ARCH_CAPABILITIES (0x10a) Bit(4) name
      is going to be. Following the rename it would be SSBD_NO but that rolls out
      to Speculative Store Bypass Disable No.
      
      Also fixed the missing space in X86_FEATURE_AMD_SSBD.
      
      [ tglx: Fixup x86_amd_rds_enable() and rds_tif_to_amd_ls_cfg() as well ]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
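      For reference, the renamed control bit is bit 2 of the IA32_SPEC_CTRL MSR;
      a sketch of the post-rename definitions (approximate, based on
      msr-index.h of that era):

        #define MSR_IA32_SPEC_CTRL      0x00000048  /* Speculation Control */
        #define SPEC_CTRL_SSBD_SHIFT    2           /* Speculative Store Bypass Disable */
        #define SPEC_CTRL_SSBD          (1UL << SPEC_CTRL_SSBD_SHIFT)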
  16. 06 May 2018: 1 commit
  17. 03 May 2018: 4 commits
  18. 02 May 2018: 1 commit
    • x86/cpu: Restore CPUID_8000_0008_EBX reload · c65732e4
      Authored by Thomas Gleixner
      The recent commit which addresses the x86_phys_bits corruption with
      encrypted memory on CPUID reload after a microcode update lost the reload
      of CPUID_8000_0008_EBX as well.
      
      As a consequence, IBRS and IBRS_FW are no longer detected.
      
      Restore the behaviour by bringing the reload of CPUID_8000_0008_EBX
      back. This restore has a twist due to the convoluted way the cpuid analysis
      works:
      
      CPUID_8000_0008_EBX is used by AMD to enumerate IBPB, IBRS, STIBP. On Intel
      EBX is not used. But the speculation control code sets the AMD bits when
      running on Intel depending on the Intel specific speculation control
      bits. This was done to use the same bits for alternatives.
      
      The change which moved the 8000_0008 evaluation out of get_cpu_cap() broke
      this nasty scheme due to ordering. So that on Intel the store to
      CPUID_8000_0008_EBX clears the IBPB, IBRS, STIBP bits which had been set
      before by software.
      
      So the actual CPUID_8000_0008_EBX needs to go back to the place where it
      was and the phys/virt address space calculation cannot touch it.
      
      In hindsight this should have used completely synthetic bits for IBPB,
      IBRS, STIBP instead of reusing the AMD bits, but that's for 4.18.
      
      /me needs to find time to cleanup that steaming pile of ...
      
      Fixes: d94a155c ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
      Reported-by: Jörg Otte <jrg.otte@gmail.com>
      Reported-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Jörg Otte <jrg.otte@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: kirill.shutemov@linux.intel.com
      Cc: Borislav Petkov <bp@alien8.de>
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1805021043510.1668@nanos.tec.linutronix.de
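      A sketch of the resulting split, assuming the helpers of that era (c is
      the struct cpuinfo_x86 pointer passed to both functions): get_cpu_cap()
      keeps refreshing the EBX feature word of leaf 0x80000008 on every CPUID
      re-read, while get_cpu_address_sizes() only consumes EAX of that leaf and
      must not touch EBX:

        /* get_cpu_cap(), sketch: restored EBX reload, so the AMD speculation
         * control word is refreshed e.g. after a microcode update. */
        if (c->extended_cpuid_level >= 0x80000008)
            c->x86_capability[CPUID_8000_0008_EBX] = cpuid_ebx(0x80000008);

        /* get_cpu_address_sizes(), sketch: only the address widths come from
         * EAX; writing EBX here would wipe the bits set in software on Intel. */
        if (c->extended_cpuid_level >= 0x80000008) {
            unsigned int eax = cpuid_eax(0x80000008);

            c->x86_virt_bits = (eax >> 8) & 0xff;
            c->x86_phys_bits = eax & 0xff;
        }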
  19. 10 April 2018: 1 commit
  20. 17 March 2018: 1 commit
  21. 17 February 2018: 2 commits
  22. 15 February 2018: 2 commits