1. 04 Aug 2022, 3 commits
  2. 17 Jul 2022, 1 commit
  3. 06 Dec 2021, 1 commit
  4. 13 Apr 2021, 1 commit
  5. 09 Sep 2020, 3 commits
  6. 04 Sep 2020, 1 commit
  7. 27 Jul 2020, 1 commit
  8. 25 Jun 2020, 1 commit
  9. 18 Jun 2020, 2 commits
  10. 11 Jun 2020, 1 commit
  11. 07 May 2020, 1 commit
  12. 22 Apr 2020, 1 commit
  13. 27 Mar 2020, 1 commit
  14. 21 Mar 2020, 1 commit
  15. 24 Jan 2020, 1 commit
    • x86/mpx: remove MPX from arch/x86 · 45fc24e8
      Committed by Dave Hansen
      From: Dave Hansen <dave.hansen@linux.intel.com>
      
      MPX is being removed from the kernel due to a lack of support
      in the toolchain going forward (gcc).
      
      This removes all the (dead at this point) MPX handling code
      remaining in the tree.  The only remaining code is the XSAVE
      support for MPX state, which is currently needed for KVM to handle
      VMs which might use MPX.
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: x86@kernel.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
  16. 14 Jan 2020, 2 commits
    • x86/cpu: Detect VMX features on Intel, Centaur and Zhaoxin CPUs · b47ce1fe
      Committed by Sean Christopherson
      Add an entry in struct cpuinfo_x86 to track VMX capabilities and fill
      the capabilities during IA32_FEAT_CTL MSR initialization.
      
      Make the VMX capabilities dependent on IA32_FEAT_CTL and
      X86_FEATURE_NAMES so as to avoid unnecessary overhead on CPUs that can't
      possibly support VMX, or when /proc/cpuinfo is not available.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/20191221044513.21680-11-sean.j.christopherson@intel.com
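      As a minimal sketch of what this detection enables (illustrative, not
      the exact upstream code; the MSR address and bit positions follow the
      Intel SDM), VMX is only usable once firmware has enabled it and locked
      IA32_FEAT_CTL:

        #include <linux/bits.h>
        #include <linux/types.h>
        #include <asm/msr.h>

        #define MSR_IA32_FEAT_CTL                 0x0000003a
        #define FEAT_CTL_LOCKED                   BIT_ULL(0)
        #define FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX  BIT_ULL(2)

        /* VMXON is legal only after firmware enables VMX and locks the MSR. */
        static bool vmx_enabled_and_locked(void)
        {
                u64 feat_ctl;

                rdmsrl(MSR_IA32_FEAT_CTL, feat_ctl);
                return (feat_ctl & FEAT_CTL_LOCKED) &&
                       (feat_ctl & FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX);
        }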
    • x86/vmx: Introduce VMX_FEATURES_* · 15934878
      Committed by Sean Christopherson
      Add a VMX-specific variant of X86_FEATURE_* flags, which will eventually
      supplant the synthetic VMX flags defined in cpufeatures word 8.  Use the
      Intel-defined layouts for the major VMX execution controls so that their
      word entries can be directly populated from their respective MSRs, and
      so that the VMX_FEATURE_* flags can be used to define the existing bit
      definitions in asm/vmx.h, i.e. force developers to define a VMX_FEATURE
      flag when adding support for a new hardware feature.
      
      The majority of Intel's (and compatible CPU's) VMX capabilities are
      enumerated via MSRs and not CPUID, i.e. querying /proc/cpuinfo doesn't
      naturally provide any insight into the virtualization capabilities of
      VMX enabled CPUs.  Commit
      
        e38e05a8 ("x86: extended "flags" to show virtualization HW feature in /proc/cpuinfo")
      
      attempted to address the issue by synthesizing select VMX features into
      a Linux-defined word in cpufeatures.
      
      Lack of reporting of VMX capabilities via /proc/cpuinfo is problematic
      because there is no sane way for a user to query the capabilities of
      their platform, e.g. when trying to find a platform to test a feature or
      debug an issue that has a hardware dependency.  Lack of reporting is
      especially problematic when the user isn't familiar with VMX, e.g. the
      format of the MSRs is non-standard, existence of some MSRs is reported
      by bits in other MSRs, several "features" from KVM's point of view are
      enumerated as 3+ distinct features by hardware, etc...
      
      The synthetic cpufeatures approach has several flaws:
      
        - The set of synthesized VMX flags has become extremely stale with
          respect to the full set of VMX features, e.g. only one new flag
          (EPT A/D) has been added in the decade since the introduction of
          the synthetic VMX features.  Failure to keep the VMX flags up to
          date is likely due to the lack of a mechanism that forces developers
          to consider whether or not a new feature is worth reporting.
      
        - The synthetic flags may be misinterpreted as affecting kernel
          behavior when in fact they do not; KVM, the kernel's sole
          consumer of VMX, completely ignores the synthetic flags.
      
        - New CPU vendors that support VMX have duplicated the hideous code
          that propagates VMX features from MSRs to cpufeatures.  Bringing the
          synthetic VMX flags up to date would exacerbate the copy+paste
          trainwreck.
      
      Define separate VMX_FEATURE flags to set the stage for enumerating VMX
      capabilities outside of the cpu_has() framework, and for adding
      functional usage of VMX_FEATURE_* to help ensure the features reported
      via /proc/cpuinfo are up to date with respect to kernel recognition of
      VMX capabilities.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/20191221044513.21680-10-sean.j.christopherson@intel.com
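      A sketch of the word/bit encoding described above (the word layout and
      names here are illustrative assumptions, not the exact upstream table):
      each 32-bit feature word mirrors one VMX control MSR, so it can be
      populated straight from the MSR and tested like an X86_FEATURE_* flag:

        #include <linux/bits.h>
        #include <linux/types.h>

        #define NVMXINTS                5  /* number of 32-bit VMX feature words (assumed) */
        #define VMX_FEATURE(word, bit)  ((word) * 32 + (bit))

        /* Example: pin-based execution controls occupy word 0, filled from
         * the allowed-1 half of MSR_IA32_VMX_PINBASED_CTLS (illustrative). */
        #define VMX_FEATURE_VIRTUAL_NMIS  VMX_FEATURE(0, 5)

        static inline bool test_vmx_feature(const u32 *caps, int feature)
        {
                return caps[feature / 32] & BIT(feature % 32);
        }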
  17. 14 Dec 2019, 1 commit
  18. 27 Nov 2019, 3 commits
    • x86/doublefault/32: Move #DF stack and TSS to cpu_entry_area · dc4e0021
      Committed by Andy Lutomirski
      There are three problems with the current layout of the doublefault
      stack and TSS.  First, the TSS is only cacheline-aligned, which is
      not enough -- if the hardware portion of the TSS (struct x86_hw_tss)
      crosses a page boundary, horrible things happen [0].  Second, the
      stack and TSS are global, so simultaneous double faults on different
      CPUs will cause massive corruption.  Third, the whole mechanism
      won't work if user CR3 is loaded, resulting in a triple fault [1].
      
      Let the doublefault stack and TSS share a page (which prevents the
      TSS from spanning a page boundary), make it percpu, and move it into
      cpu_entry_area.  Teach the stack dump code about the doublefault
      stack.
      
      [0] Real hardware will read past the end of the page onto the next
          *physical* page if a task switch happens.  Virtual machines may
          have any number of bugs, and I would consider it reasonable for
          a VM to summarily kill the guest if it tries to task-switch to
          a page-spanning TSS.
      
      [1] Real hardware triple faults.  At least some VMs seem to hang.
          I'm not sure what's going on.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
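      The shared-page layout described above looks roughly like this
      (simplified sketch; the stack sizing is an assumption): one page-aligned
      per-CPU object holds both the #DF stack and the hardware TSS, so the
      TSS can never straddle a page boundary and each CPU gets its own copy:

        #include <linux/percpu.h>
        #include <asm/page.h>
        #include <asm/processor.h>      /* struct x86_hw_tss */

        struct doublefault_stack {
                /* The stack fills whatever the hardware TSS leaves of the page. */
                unsigned long stack[(PAGE_SIZE - sizeof(struct x86_hw_tss)) /
                                    sizeof(unsigned long)];
                struct x86_hw_tss tss;
        } __aligned(PAGE_SIZE);

        static DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack);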
    • x86/traps: Disentangle the 32-bit and 64-bit doublefault code · 93efbde2
      Committed by Andy Lutomirski
      The 64-bit doublefault handler is much nicer than the 32-bit one.
      As a first step toward unifying them, make the 64-bit handler
      self-contained.  This should have no functional effect except in
      the odd case of x86_64 with CONFIG_DOUBLEFAULT=n, in which case it
      will change the logging a bit.
      
      This also gets rid of CONFIG_DOUBLEFAULT configurability on 64-bit
      kernels.  It didn't do anything useful -- CONFIG_DOUBLEFAULT=n
      didn't actually disable doublefault handling on x86_64.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/iopl: Make 'struct tss_struct' constant size again · 0bcd7762
      Committed by Ingo Molnar
      After the following commit:
      
        05b042a1 ("x86/pti/32: Calculate the various PTI cpu_entry_area sizes correctly, make the CPU_ENTRY_AREA_PAGES assert precise")
      
      'struct cpu_entry_area' has to be Kconfig invariant, so that we always
      have a matching CPU_ENTRY_AREA_PAGES size.
      
      This commit added a CONFIG_X86_IOPL_IOPERM dependency to tss_struct:
      
        111e7b15 ("x86/ioperm: Extend IOPL config to control ioperm() as well")
      
      That change, with CONFIG_X86_IOPL_IOPERM turned off, reduces the size of
      cpu_entry_area by two pages, triggering the assert:
      
        ./include/linux/compiler.h:391:38: error: call to ‘__compiletime_assert_202’ declared with attribute error: BUILD_BUG_ON failed: (CPU_ENTRY_AREA_PAGES+1)*PAGE_SIZE != CPU_ENTRY_AREA_MAP_SIZE
      
      Simplify the Kconfig dependencies and make cpu_entry_area constant
      size on 32-bit kernels again.
      
      Fixes: 05b042a1 ("x86/pti/32: Calculate the various PTI cpu_entry_area sizes correctly, make the CPU_ENTRY_AREA_PAGES assert precise")
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
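      The assert that fired is a compile-time size check of roughly this
      shape (surrounding context assumed): cpu_entry_area's mapped size must
      match its Kconfig-invariant page count, so any Kconfig-dependent field
      in tss_struct breaks the build:

        #include <linux/build_bug.h>

        static inline void assert_cpu_entry_area_size(void)
        {
                /* The +1 accounts for an extra page in the map (assumed). */
                BUILD_BUG_ON((CPU_ENTRY_AREA_PAGES + 1) * PAGE_SIZE !=
                             CPU_ENTRY_AREA_MAP_SIZE);
        }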
  19. 16 Nov 2019, 8 commits
  20. 05 Nov 2019, 1 commit
    • x86/mm: Report which part of kernel image is freed · 5494c3a6
      Committed by Kees Cook
      The memory freeing report wasn't very useful for figuring out which
      parts of the kernel image were being freed. Add the details for clearer
      reporting in dmesg.
      
      Before:
      
        Freeing unused kernel image memory: 1348K
        Write protecting the kernel read-only data: 20480k
        Freeing unused kernel image memory: 2040K
        Freeing unused kernel image memory: 172K
      
      After:
      
        Freeing unused kernel image (initmem) memory: 1348K
        Write protecting the kernel read-only data: 20480k
        Freeing unused kernel image (text/rodata gap) memory: 2040K
        Freeing unused kernel image (rodata/data gap) memory: 172K
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-s390@vger.kernel.org
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Segher Boessenkool <segher@kernel.crashing.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will@kernel.org>
      Cc: x86-ml <x86@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: https://lkml.kernel.org/r/20191029211351.13243-28-keescook@chromium.org
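      A sketch of the reporting change (the signature and body are assumed,
      not the exact upstream code): the freeing helper takes a short
      description so each dmesg line names the region being released:

        #include <linux/printk.h>

        static void free_kernel_image_pages(const char *what, void *begin, void *end)
        {
                unsigned long freed_k = ((unsigned long)end - (unsigned long)begin) >> 10;

                /* The actual page freeing is elided; only the reporting is shown. */
                pr_info("Freeing unused kernel image (%s) memory: %luK\n",
                        what, freed_k);
        }

        /* e.g. free_kernel_image_pages("initmem", __init_begin, __init_end); */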
  21. 28 Oct 2019, 1 commit
    • x86/speculation/taa: Add mitigation for TSX Async Abort · 1b42f017
      Committed by Pawan Gupta
      TSX Async Abort (TAA) is a side channel vulnerability to the internal
      buffers in some Intel processors, similar to Microarchitectural Data
      Sampling (MDS). In this case, certain loads may speculatively pass
      invalid data to dependent operations when an asynchronous abort
      condition is pending in a TSX transaction.
      
      This includes loads with no fault or assist condition. Such loads may
      speculatively expose stale data from the uarch data structures as in
      MDS. Exposure is possible both same-thread and cross-thread. This
      issue affects all current processors that support TSX, but do not have
      ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES.
      
      On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0,
      CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers
      using VERW or L1D_FLUSH, there is no additional mitigation needed for
      TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by
      disabling the Transactional Synchronization Extensions (TSX) feature.
      
      On current processors after a microcode update, and on future
      processors, a new MSR, IA32_TSX_CTRL, can be used to control the TSX
      feature. There are two bits in that MSR:
      
      * TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted
      Transactional Memory (RTM).
      
      * TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID. The other
      TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally
      disabled with updated microcode but still enumerated as present by
      CPUID(EAX=7).EBX{bit4}.
      
      The second mitigation approach is similar to MDS: clear the affected
      CPU buffers on return to user space and when entering a guest. A
      relevant microcode update is required for the mitigation to work.
      More details on this approach can be found here:
      
        https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
      
      The TSX feature can be controlled by the "tsx" command line parameter.
      If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is
      deployed. The effective mitigation state can be read from sysfs.
      
       [ bp:
         - massage + comments cleanup
         - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh.
         - remove partial TAA mitigation in update_mds_branch_idle() - Josh.
         - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g
       ]
      Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
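      A sketch of the TSX-disable path described above (simplified; the MSR
      address and bit meanings follow the commit message, the helper name is
      illustrative): set both control bits so RTM is disabled and no longer
      enumerated in CPUID:

        #include <linux/bits.h>
        #include <linux/types.h>
        #include <asm/msr.h>

        #define MSR_IA32_TSX_CTRL     0x00000122
        #define TSX_CTRL_RTM_DISABLE  BIT_ULL(0)  /* turn RTM off */
        #define TSX_CTRL_CPUID_CLEAR  BIT_ULL(1)  /* hide RTM from CPUID */

        static void tsx_disable(void)
        {
                u64 tsx;

                rdmsrl(MSR_IA32_TSX_CTRL, tsx);
                tsx |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
                wrmsrl(MSR_IA32_TSX_CTRL, tsx);
        }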
  22. 11 Jul 2019, 1 commit
  23. 22 Jun 2019, 1 commit
  24. 23 May 2019, 2 commits