1. 23 June 2021, 7 commits
  2. 22 June 2021, 2 commits
    • x86/fpu: Make init_fpstate correct with optimized XSAVE · f9dfb5e3
      By Thomas Gleixner
      The XSAVE init code initializes all enabled and supported components with
      XRSTOR(S) to init state. Then it XSAVEs the state of the components back
      into init_fpstate which is used in several places to fill in the init state
      of components.
      
      This works correctly with XSAVE, but not with XSAVEOPT and XSAVES because
      those use the init optimization and skip writing state of components which
      are in init state. So init_fpstate.xsave still contains all zeroes after
      this operation.
      
      There are two ways to solve that:
      
         1) Use XSAVE unconditionally, but that requires reshuffling the buffer when
            XSAVES is enabled because XSAVES uses the compacted format.
      
         2) Save the components which are known to have a non-zero init state by other
            means.
      
      Looking deeper, #2 is the right thing to do because all components the
      kernel supports have all-zeroes init state except the legacy features (FP,
      SSE). Those cannot be hard coded because the states are not identical on all
      CPUs, but they can be saved with FXSAVE which avoids all conditionals.
      
      Use FXSAVE to save the legacy FP/SSE components in init_fpstate along with
      a BUILD_BUG_ON() which reminds developers to validate that a newly added
      component has all zeroes init state. As a bonus remove the now unused
      copy_xregs_to_kernel_booting() crutch.
      
      The XSAVE and reshuffle method can still be implemented in the unlikely
      case that components are added which have a non-zero init state and no
      other means to save them. For now, FXSAVE is just simple and good enough.
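
      A minimal sketch of the resulting approach (hedged: names and the
      BUILD_BUG_ON condition are illustrative, not the literal patch):

        /*
         * Sketch: init_fpstate is initialized by saving the legacy FP/SSE
         * state with FXSAVE, because those are the only supported
         * components whose init state is not all zeroes.
         */
        static void __init fpstate_init_fxstate(void)
        {
                /* Reminder for new features: anything beyond FP/SSE must
                 * have an all-zeroes init state (illustrative mask name). */
                BUILD_BUG_ON(XFEATURES_WITH_NONZERO_INIT & ~XFEATURE_MASK_FPSSE);

                asm volatile("fxsave %0" : "=m" (init_fpstate.fxsave));
        }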
      
        [ bp: Fix a typo or two in the text. ]
      
      Fixes: 6bad06b7 ("x86, xsave: Use xsaveopt in context-switch path when supported")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210618143444.587311343@linutronix.de
      f9dfb5e3
    • x86/fpu: Preserve supervisor states in sanitize_restored_user_xstate() · 9301982c
      By Thomas Gleixner
      sanitize_restored_user_xstate() preserves the supervisor states only
      when the fx_only argument is zero, which allows unprivileged user space
      to put supervisor states back into init state.
      
      Preserve them unconditionally.
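
      A hedged sketch of the shape of the fix in
      sanitize_restored_user_xstate() (simplified):

        /* Restrict the user-controlled feature bits, but always OR the
         * supervisor bits back in, regardless of fx_only. */
        u64 mask = fx_only ? XFEATURE_MASK_FPSSE : user_xfeatures;

        header->xfeatures &= mask | xfeatures_mask_supervisor();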
      
       [ bp: Fix a typo or two in the text. ]
      
      Fixes: 5d6b6a6f ("x86/fpu/xstate: Update sanitize_restored_xstate() for supervisor xstates")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210618143444.438635017@linutronix.de
      9301982c
  3. 19 June 2021, 1 commit
    • x86/mm: Avoid truncating memblocks for SGX memory · 28e5e44a
      By Fan Du
      tl;dr:
      
      Several SGX users reported seeing the following message on NUMA systems:
      
        sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.
      
      This turned out to be the memblock code mistakenly throwing away SGX
      memory.
      
      === Full Changelog ===
      
      The 'max_pfn' variable represents the highest known RAM address.  It can
      be used, for instance, to quickly determine for which physical addresses
      there is mem_map[] space allocated.  The numa_meminfo code makes an
      effort to throw out ("trim") all memory blocks which are above 'max_pfn'.
      
      SGX memory is not considered RAM (it is marked as "Reserved" in the
      e820) and is not taken into account by max_pfn. Despite this, SGX memory
      areas have NUMA affinity and are enumerated in the ACPI SRAT table. The
      existing SGX code uses the numa_meminfo mechanism to look up the NUMA
      affinity for its memory areas.
      
      In cases where SGX memory was above max_pfn (usually just the one EPC
      section in the last highest NUMA node), the numa_memblock is truncated
      at 'max_pfn', which is below the SGX memory.  When the SGX code tries to
      look up the affinity of this memory, it fails and produces an error message:
      
        sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.
      
      and assigns the memory to NUMA node 0.
      
      Instead of silently truncating the memory block at 'max_pfn' and
      dropping the SGX memory, add the truncated portion to
      'numa_reserved_meminfo'.  This allows the SGX code to later determine
      the NUMA affinity of its 'Reserved' area.
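
      A hedged sketch of the trimming logic after the change (the helper
      name numa_save_reserved_range() is illustrative):

        /* Sketch: while trimming numa_meminfo at max_pfn, move the cut-off
         * tail into numa_reserved_meminfo instead of dropping it. */
        for (i = 0; i < mi->nr_blks; i++) {
                struct numa_memblk *bi = &mi->blk[i];

                if (bi->end > max_addr) {
                        numa_save_reserved_range(max_addr, bi->end, bi->nid);
                        bi->end = max_addr;
                }
        }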
      
      Before, numa_meminfo looked like this (from 'crash'):
      
        blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
              { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }
      
      numa_reserved_meminfo is empty.
      
      With this, numa_meminfo looks like this:
      
        blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
              { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }
      
      and numa_reserved_meminfo has an entry for node 1's SGX memory:
      
        blk =  { start = 0x4000000000, end = 0x4080000000, nid = 0x1 }
      
       [ daveh: completely rewrote/reworked changelog ]
      
      Fixes: 5d30f92e ("x86/NUMA: Provide a range-to-target_node lookup facility")
      Reported-by: Reinette Chatre <reinette.chatre@intel.com>
      Signed-off-by: Fan Du <fan.du@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20210617194657.0A99CB22@viggo.jf.intel.com
      28e5e44a
  4. 18 June 2021, 1 commit
    • PCI: Add AMD RS690 quirk to enable 64-bit DMA · cacf994a
      By Mikel Rychliski
      Although the AMD RS690 chipset has 64-bit DMA support, BIOS implementations
      sometimes fail to configure the memory limit registers correctly.
      
      The Acer F690GVM mainboard uses this chipset and a Marvell 88E8056 NIC. The
      sky2 driver programs the NIC to use 64-bit DMA, which will not work:
      
        sky2 0000:02:00.0: error interrupt status=0x8
        sky2 0000:02:00.0 eth0: tx timeout
        sky2 0000:02:00.0 eth0: transmit ring 0 .. 22 report=0 done=0
      
      Other drivers required by this mainboard either don't support 64-bit DMA,
      or have it disabled using driver specific quirks. For example, the ahci
      driver has quirks to enable or disable 64-bit DMA depending on the BIOS
      version (see ahci_sb600_enable_64bit() in ahci.c). This ahci quirk matches
      against the SB600 SATA controller, but the real issue is almost certainly
      with the RS690 PCI host that it was commonly attached to.
      
      To avoid this issue in all drivers with 64-bit DMA support, fix the
      configuration of the PCI host. If the kernel is aware of physical memory
      above 4GB, but the BIOS never configured the PCI host with this
      information, update the registers with our values.
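
      A sketch of what such a fixup could look like (hedged: the register
      name RS690_MEM_LIMIT_REG and the device-ID macro are placeholders,
      not the RS690's actual programming interface):

        static void rs690_fix_64bit_dma(struct pci_dev *pdev)
        {
                u64 top_of_dram = (u64)max_pfn << PAGE_SHIFT;
                u32 cur;

                if (top_of_dram <= SZ_4G)
                        return;  /* no RAM above 4GB, nothing to fix */

                pci_read_config_dword(pdev, RS690_MEM_LIMIT_REG, &cur);
                if (cur)
                        return;  /* BIOS already configured the limit */

                /* Program the limit with the kernel's view of memory. */
                pci_write_config_dword(pdev, RS690_MEM_LIMIT_REG,
                                       upper_32_bits(top_of_dram));
        }
        DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATI, RS690_HOST_BRIDGE_ID,
                                 rs690_fix_64bit_dma);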
      
      [bhelgaas: drop PCI_DEVICE_ID_ATI_RS690 definition]
      Link: https://lore.kernel.org/r/20210611214823.4898-1-mikel@mikelr.com
      Signed-off-by: Mikel Rychliski <mikel@mikelr.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      cacf994a
  5. 16 June 2021, 1 commit
  6. 12 June 2021, 1 commit
  7. 11 June 2021, 3 commits
    • KVM: x86/mmu: Calculate and check "full" mmu_role for nested MMU · 654430ef
      By Sean Christopherson
      Calculate and check the full mmu_role when initializing the MMU context
      for the nested MMU, where "full" means the bits and pieces of the role
      that aren't handled by kvm_calc_mmu_role_common().  While the nested MMU
      isn't used for shadow paging, things like the number of levels in the
      guest's page tables are surprisingly important when walking the guest
      page tables.  Failure to reinitialize the nested MMU context if L2's
      paging mode changes can result in unexpected and/or missed page faults,
      and likely other explosions.
      
      E.g. if an L1 vCPU is running both a 32-bit PAE L2 and a 64-bit L2, the
      "common" role calculation will yield the same role for both L2s.  If the
      64-bit L2 is run after the 32-bit PAE L2, L0 will fail to reinitialize
      the nested MMU context, ultimately resulting in a bad walk of L2's page
      tables as the MMU will still have a guest root_level of PT32E_ROOT_LEVEL.
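
      A hedged sketch of the fix's structure (simplified from the KVM MMU
      code):

        static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu)
        {
                struct kvm_mmu *g_context = &vcpu->arch.nested_mmu;
                union kvm_mmu_role new_role = kvm_calc_nested_mmu_role(vcpu);

                /* Compare the FULL role; the "common" subset alone cannot
                 * tell a 32-bit PAE L2 apart from a 64-bit L2. */
                if (new_role.as_u64 == g_context->mmu_role.as_u64)
                        return;

                g_context->mmu_role.as_u64 = new_role.as_u64;
                /* ... reinitialize root_level and guest walker callbacks ... */
        }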
      
        WARNING: CPU: 4 PID: 167334 at arch/x86/kvm/vmx/vmx.c:3075 ept_save_pdptrs+0x15/0xe0 [kvm_intel]
        Modules linked in: kvm_intel
        CPU: 4 PID: 167334 Comm: CPU 3/KVM Not tainted 5.13.0-rc1-d849817d5673-reqs #185
        Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
        RIP: 0010:ept_save_pdptrs+0x15/0xe0 [kvm_intel]
        Code: <0f> 0b c3 f6 87 d8 02 00f
        RSP: 0018:ffffbba702dbba00 EFLAGS: 00010202
        RAX: 0000000000000011 RBX: 0000000000000002 RCX: ffffffff810a2c08
        RDX: ffff91d7bc30acc0 RSI: 0000000000000011 RDI: ffff91d7bc30a600
        RBP: ffff91d7bc30a600 R08: 0000000000000010 R09: 0000000000000007
        R10: 0000000000000000 R11: 0000000000000000 R12: ffff91d7bc30a600
        R13: ffff91d7bc30acc0 R14: ffff91d67c123460 R15: 0000000115d7e005
        FS:  00007fe8e9ffb700(0000) GS:ffff91d90fb00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000000000000000 CR3: 000000029f15a001 CR4: 00000000001726e0
        Call Trace:
         kvm_pdptr_read+0x3a/0x40 [kvm]
         paging64_walk_addr_generic+0x327/0x6a0 [kvm]
         paging64_gva_to_gpa_nested+0x3f/0xb0 [kvm]
         kvm_fetch_guest_virt+0x4c/0xb0 [kvm]
         __do_insn_fetch_bytes+0x11a/0x1f0 [kvm]
         x86_decode_insn+0x787/0x1490 [kvm]
         x86_decode_emulated_instruction+0x58/0x1e0 [kvm]
         x86_emulate_instruction+0x122/0x4f0 [kvm]
         vmx_handle_exit+0x120/0x660 [kvm_intel]
         kvm_arch_vcpu_ioctl_run+0xe25/0x1cb0 [kvm]
         kvm_vcpu_ioctl+0x211/0x5a0 [kvm]
         __x64_sys_ioctl+0x83/0xb0
         do_syscall_64+0x40/0xb0
         entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: stable@vger.kernel.org
      Fixes: bf627a92 ("x86/kvm/mmu: check if MMU reconfiguration is needed in init_kvm_nested_mmu()")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210610220026.1364486-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      654430ef
    • KVM: X86: Fix x86_emulator slab cache leak · dfdc0a71
      By Wanpeng Li
      Commit c9b8b07c (KVM: x86: Dynamically allocate per-vCPU emulation context)
      made the per-vCPU emulation context dynamically allocated; however, the
      x86_emulator slab cache still exists after the VM is destroyed and the
      kvm module is unloaded, as shown below.
      
      grep x86_emulator /proc/slabinfo
      x86_emulator          36     36   2672   12    8 : tunables    0    0    0 : slabdata      3      3      0
      
      Fix this slab cache leak by destroying the x86_emulator slab cache when
      the kvm module is unloaded, as sketched below.
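
      A minimal sketch (hedged; assuming the cache pointer is the
      module-scoped x86_emulator_cache created at init time):

        /* Pair the kmem_cache_create() from module init with a destroy on
         * module unload so the cache does not outlive the module. */
        static void __exit kvm_x86_exit(void)
        {
                kmem_cache_destroy(x86_emulator_cache);
        }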
      
      Fixes: c9b8b07c (KVM: x86: Dynamically allocate per-vCPU emulation context)
      Cc: stable@vger.kernel.org
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Message-Id: <1623387573-5969-1-git-send-email-wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      dfdc0a71
    • KVM: SVM: Call SEV Guest Decommission if ASID binding fails · 934002cd
      By Alper Gun
      Send the SEV_CMD_DECOMMISSION command to the PSP firmware if ASID binding
      fails. If a failure happens after a successful LAUNCH_START command,
      a decommission command should be executed. Otherwise, the guest context
      is never freed inside the AMD SP. Once the firmware no longer has
      memory to allocate more SEV guest contexts, the LAUNCH_START command
      begins to fail with the SEV_RET_RESOURCE_LIMIT error.
      
      The existing code calls decommission inside sev_unbind_asid, but it is
      not called if a failure happens before guest activation succeeds. If
      sev_bind_asid fails, decommission is never called. PSP firmware has a
      limit for the number of guests. If sev_asid_binding fails many times,
      PSP firmware will not have resources to create another guest context.
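
      A hedged sketch of the corrected control flow in the LAUNCH_START
      path (simplified):

        ret = sev_bind_asid(kvm, start.handle, error);
        if (ret) {
                /* LAUNCH_START succeeded but activation failed: tell the
                 * PSP to free the guest context it allocated. */
                sev_decommission(start.handle);
                goto e_free_session;
        }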
      
      Cc: stable@vger.kernel.org
      Fixes: 59414c98 ("KVM: SVM: Add support for KVM_SEV_LAUNCH_START command")
      Reported-by: Peter Gonda <pgonda@google.com>
      Signed-off-by: Alper Gun <alpergun@google.com>
      Reviewed-by: Marc Orr <marcorr@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20210610174604.2554090-1-alpergun@google.com>
      934002cd
  8. 10 June 2021, 6 commits
    • KVM: x86: Immediately reset the MMU context when the SMM flag is cleared · 78fcb2c9
      By Sean Christopherson
      Immediately reset the MMU context when the vCPU's SMM flag is cleared so
      that the SMM flag in the MMU role is always synchronized with the vCPU's
      flag.  If RSM fails (which isn't correctly emulated), KVM will bail
      without calling post_leave_smm() and leave the MMU in a bad state.
      
      The bad MMU role can lead to a NULL pointer dereference when grabbing a
      shadow page's rmap for a page fault as the initial lookups for the gfn
      will happen with the vCPU's SMM flag (=0), whereas the rmap lookup will
      use the shadow page's SMM flag, which comes from the MMU (=1).  SMM has
      an entirely different set of memslots, and so the initial lookup can find
      a memslot (SMM=0) and then explode on the rmap memslot lookup (SMM=1).
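
      A sketch of the essential change (hedged; the surrounding SMM
      bookkeeping is omitted):

        static void kvm_smm_changed(struct kvm_vcpu *vcpu)
        {
                /* Reset the MMU context immediately so the mmu_role's SMM
                 * bit can never diverge from the vCPU's SMM flag. */
                kvm_mmu_reset_context(vcpu);
        }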
      
        general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
        KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
        CPU: 1 PID: 8410 Comm: syz-executor382 Not tainted 5.13.0-rc5-syzkaller #0
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
        RIP: 0010:__gfn_to_rmap arch/x86/kvm/mmu/mmu.c:935 [inline]
        RIP: 0010:gfn_to_rmap+0x2b0/0x4d0 arch/x86/kvm/mmu/mmu.c:947
        Code: <42> 80 3c 20 00 74 08 4c 89 ff e8 f1 79 a9 00 4c 89 fb 4d 8b 37 44
        RSP: 0018:ffffc90000ffef98 EFLAGS: 00010246
        RAX: 0000000000000000 RBX: ffff888015b9f414 RCX: ffff888019669c40
        RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000001
        RBP: 0000000000000001 R08: ffffffff811d9cdb R09: ffffed10065a6002
        R10: ffffed10065a6002 R11: 0000000000000000 R12: dffffc0000000000
        R13: 0000000000000003 R14: 0000000000000001 R15: 0000000000000000
        FS:  000000000124b300(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000000000000000 CR3: 0000000028e31000 CR4: 00000000001526e0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         rmap_add arch/x86/kvm/mmu/mmu.c:965 [inline]
         mmu_set_spte+0x862/0xe60 arch/x86/kvm/mmu/mmu.c:2604
         __direct_map arch/x86/kvm/mmu/mmu.c:2862 [inline]
         direct_page_fault+0x1f74/0x2b70 arch/x86/kvm/mmu/mmu.c:3769
         kvm_mmu_do_page_fault arch/x86/kvm/mmu.h:124 [inline]
         kvm_mmu_page_fault+0x199/0x1440 arch/x86/kvm/mmu/mmu.c:5065
         vmx_handle_exit+0x26/0x160 arch/x86/kvm/vmx/vmx.c:6122
         vcpu_enter_guest+0x3bdd/0x9630 arch/x86/kvm/x86.c:9428
         vcpu_run+0x416/0xc20 arch/x86/kvm/x86.c:9494
         kvm_arch_vcpu_ioctl_run+0x4e8/0xa40 arch/x86/kvm/x86.c:9722
         kvm_vcpu_ioctl+0x70f/0xbb0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3460
         vfs_ioctl fs/ioctl.c:51 [inline]
         __do_sys_ioctl fs/ioctl.c:1069 [inline]
         __se_sys_ioctl+0xfb/0x170 fs/ioctl.c:1055
         do_syscall_64+0x3f/0xb0 arch/x86/entry/common.c:47
         entry_SYSCALL_64_after_hwframe+0x44/0xae
        RIP: 0033:0x440ce9
      
      Cc: stable@vger.kernel.org
      Reported-by: syzbot+fb0b6a7e8713aeb0319c@syzkaller.appspotmail.com
      Fixes: 9ec19493 ("KVM: x86: clear SMM flags before loading state while leaving SMM")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210609185619.992058-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      78fcb2c9
    • KVM: x86: Fix fall-through warnings for Clang · 551912d2
      By Gustavo A. R. Silva
      In preparation to enable -Wimplicit-fallthrough for Clang, fix a couple
      of warnings by explicitly adding break statements instead of just letting
      the code fall through to the next case.
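
      The pattern being fixed, as a generic example (case labels are
      illustrative):

        switch (reg) {
        case REG_EXAMPLE_A:
                handle_a();
                break;  /* explicit break instead of an implicit
                         * fall-through that Clang would warn about */
        case REG_EXAMPLE_B:
                handle_b();
                break;
        }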
      
      Link: https://github.com/KSPP/linux/issues/115
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Message-Id: <20210528200756.GA39320@embeddedor>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      551912d2
    • KVM: SVM: fix doc warnings · 02ffbe63
      By ChenXiaoSong
      Fix kernel-doc warnings:
      
      arch/x86/kvm/svm/avic.c:233: warning: Function parameter or member 'activate' not described in 'avic_update_access_page'
      arch/x86/kvm/svm/avic.c:233: warning: Function parameter or member 'kvm' not described in 'avic_update_access_page'
      arch/x86/kvm/svm/avic.c:781: warning: Function parameter or member 'e' not described in 'get_pi_vcpu_info'
      arch/x86/kvm/svm/avic.c:781: warning: Function parameter or member 'kvm' not described in 'get_pi_vcpu_info'
      arch/x86/kvm/svm/avic.c:781: warning: Function parameter or member 'svm' not described in 'get_pi_vcpu_info'
      arch/x86/kvm/svm/avic.c:781: warning: Function parameter or member 'vcpu_info' not described in 'get_pi_vcpu_info'
      arch/x86/kvm/svm/avic.c:1009: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
      Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>
      Message-Id: <20210609122217.2967131-1-chenxiaosong2@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      02ffbe63
    • x86/nmi_watchdog: Fix old-style NMI watchdog regression on old Intel CPUs · a8383dfb
      By CodyYao-oc
      The following commit:
      
         3a4ac121 ("x86/perf: Add hardware performance events support for Zhaoxin CPU.")
      
      Got the old-style NMI watchdog logic wrong and broke it for basically every
      Intel CPU where it was active, which is only truly old CPUs, so few people noticed.
      
      On CPUs with perf events support we turn off the old-style NMI watchdog, so it
      was pretty pointless to add the logic for X86_VENDOR_ZHAOXIN to begin with ... :-/
      
      Anyway, the fix is to restore the old logic and add a 'break'.
      
      [ mingo: Wrote a new changelog. ]
      
      Fixes: 3a4ac121 ("x86/perf: Add hardware performance events support for Zhaoxin CPU.")
      Signed-off-by: CodyYao-oc <CodyYao-oc@zhaoxin.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20210607025335.9643-1-CodyYao-oc@zhaoxin.com
      a8383dfb
    • x86/fpu: Reset state for all signal restore failures · efa16550
      By Thomas Gleixner
      If access_ok() or fpregs_soft_set() fails in __fpu__restore_sig() then the
      function just returns but does not clear the FPU state as it does for all
      other fatal failures.
      
      Clear the FPU state for these failures as well.
      
      Fixes: 72a671ce ("x86, fpu: Unify signal handling code paths for x86 and x86_64 kernels")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/87mtryyhhz.ffs@nanos.tec.linutronix.de
      efa16550
    • kvm: LAPIC: Restore guard to prevent illegal APIC register access · 218bf772
      By Jim Mattson
      Per the SDM, "any access that touches bytes 4 through 15 of an APIC
      register may cause undefined behavior and must not be executed."
      Worse, such an access in kvm_lapic_reg_read can result in a leak of
      kernel stack contents. Prior to commit 01402cf8 ("kvm: LAPIC:
      write down valid APIC registers"), such an access was explicitly
      disallowed. Restore the guard that was removed in that commit.
      
      Fixes: 01402cf8 ("kvm: LAPIC: write down valid APIC registers")
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Message-Id: <20210602205224.3189316-1-jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      218bf772
  9. 09 June 2021, 11 commits
    • x86/fpu: Add address range checks to copy_user_to_xstate() · f72a249b
      By Andy Lutomirski
      copy_user_to_xstate() uses __copy_from_user(), which provides a negligible
      speedup.  Fortunately, both call sites are at least almost correct.
      
      __fpu__restore_sig() checks access_ok() with xstate_sigframe_size()
      length and ptrace regset access uses fpu_user_xstate_size. These should
      be valid upper bounds on the length, so, at worst, this would cause
      spurious failures and not accesses to kernel memory.
      
      Nonetheless, this is far more fragile than necessary and none of these
      callers are in a hotpath.
      
      Use copy_from_user() instead.
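
      For reference, the difference is only the access_ok() range check
      that copy_from_user() performs and __copy_from_user() skips:

        /* Before: correctness depended on every caller's access_ok(). */
        if (__copy_from_user(dst, ubuf, size))
                return -EFAULT;

        /* After: the range check is done here, in the copy itself. */
        if (copy_from_user(dst, ubuf, size))
                return -EFAULT;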
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Rik van Riel <riel@surriel.com>
      Link: https://lkml.kernel.org/r/20210608144346.140254130@linutronix.de
      f72a249b
    • x86/pkru: Write hardware init value to PKRU when xstate is init · 510b80a6
      By Thomas Gleixner
      When user space brings PKRU into init state, then the kernel handling is
      broken:
      
        T1 user space
           xsave(state)
           state.header.xfeatures &= ~XFEATURE_MASK_PKRU;
           xrstor(state)
      
        T1 -> kernel
           schedule()
             XSAVE(S) -> T1->xsave.header.xfeatures[PKRU] == 0
             T1->flags |= TIF_NEED_FPU_LOAD;
      
             wrpkru();
      
           schedule()
             ...
             pk = get_xsave_addr(&T1->fpu->state.xsave, XFEATURE_PKRU);
             if (pk)
      	 wrpkru(pk->pkru);
             else
      	 wrpkru(DEFAULT_PKRU);
      
      Because the xfeatures bit is 0 and therefore the value in the xsave
      storage is not valid, get_xsave_addr() returns NULL and switch_to()
      writes the default PKRU. -> FAIL #1!
      
      So that wrecks any copy_to/from_user() on the way back to user space
      which hits memory which is protected by the default PKRU value.
      
      Assumed that this does not fail (pure luck) then T1 goes back to user
      space and because TIF_NEED_FPU_LOAD is set it ends up in
      
        switch_fpu_return()
            __fpregs_load_activate()
              if (!fpregs_state_valid()) {
        	 load_XSTATE_from_task();
              }
      
      But if nothing touched the FPU between T1 scheduling out and back in,
      then the fpregs_state is still valid which means switch_fpu_return()
      does nothing and just clears TIF_NEED_FPU_LOAD. Back to user space with
      DEFAULT_PKRU loaded. -> FAIL #2!
      
      The fix is simple: if get_xsave_addr() returns NULL then set the
      PKRU value to 0 instead of the restrictive default PKRU value in
      init_pkru_value.
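
      The fix, sketched against the switch_to() pseudocode above:

        pk = get_xsave_addr(&T1->fpu->state.xsave, XFEATURE_PKRU);
        /*
         * A NULL pk means the PKRU bit is clear in xfeatures, i.e. PKRU
         * is in init state: write the hardware init value 0, not the
         * restrictive init_pkru_value default.
         */
        wrpkru(pk ? pk->pkru : 0);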
      
       [ bp: Massage in minor nitpicks from folks. ]
      
      Fixes: 0cecca9d ("x86/fpu: Eager switch PKRU state")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Rik van Riel <riel@surriel.com>
      Tested-by: Babu Moger <babu.moger@amd.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210608144346.045616965@linutronix.de
      510b80a6
    • x86/process: Check PF_KTHREAD and not current->mm for kernel threads · 12f7764a
      By Thomas Gleixner
      switch_fpu_finish() checks current->mm as indicator for kernel threads.
      That's wrong because kernel threads can temporarily use a mm of a user
      process via kthread_use_mm().
      
      Check the task flags for PF_KTHREAD instead.
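
      The change, in essence:

        /* Before: misidentifies a kthread that borrowed a user mm via
         * kthread_use_mm() as a user task. */
        if (!current->mm)
                return;

        /* After: PF_KTHREAD is authoritative even while the kthread is
         * temporarily using a user mm. */
        if (current->flags & PF_KTHREAD)
                return;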
      
      Fixes: 0cecca9d ("x86/fpu: Eager switch PKRU state")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Rik van Riel <riel@surriel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210608144345.912645927@linutronix.de
      12f7764a
    • x86/fpu: Invalidate FPU state after a failed XRSTOR from a user buffer · d8778e39
      By Andy Lutomirski
      Both Intel and AMD consider it to be architecturally valid for XRSTOR to
      fail with #PF but nonetheless change the register state.  The actual
      conditions under which this might occur are unclear [1], but it seems
      plausible that this might be triggered if one sibling thread unmaps a page
      and invalidates the shared TLB while another sibling thread is executing
      XRSTOR on the page in question.
      
      __fpu__restore_sig() can execute XRSTOR while the hardware registers
      are preserved on behalf of a different victim task (using the
      fpu_fpregs_owner_ctx mechanism), and, in theory, XRSTOR could fail but
      modify the registers.
      
      If this happens, then there is a window in which __fpu__restore_sig()
      could schedule out and the victim task could schedule back in without
      reloading its own FPU registers. This would result in part of the FPU
      state that __fpu__restore_sig() was attempting to load leaking into the
      victim task's user-visible state.
      
      Invalidate preserved FPU registers on XRSTOR failure to prevent this
      situation from corrupting any state.
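
      A hedged sketch of the fix site (helper names from the FPU internals;
      details simplified):

        ret = __restore_xregs_from_user(buf);
        if (ret) {
                /* XRSTOR may have modified registers that still belong to
                 * another task via fpu_fpregs_owner_ctx: invalidate them
                 * so the owner reloads its state before using them. */
                __cpu_invalidate_fpregs_state();
        }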
      
      [1] Frequent readers of the errata lists might imagine "complex
          microarchitectural conditions".
      
      Fixes: 1d731e73 ("x86/fpu: Add a fastpath to __fpu__restore_sig()")
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Rik van Riel <riel@surriel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210608144345.758116583@linutronix.de
      d8778e39
    • x86/fpu: Prevent state corruption in __fpu__restore_sig() · 484cea4f
      By Thomas Gleixner
      The non-compacted slowpath uses __copy_from_user() and copies the entire
      user buffer into the kernel buffer, verbatim.  This means that the kernel
      buffer may now contain entirely invalid state on which XRSTOR will #GP.
      validate_user_xstate_header() can detect some of that corruption, but that
      leaves the onus on callers to clear the buffer.
      
      Prior to XSAVES support it was possible to just reinitialize the buffer
      completely, but with supervisor states that is no longer possible as the
      buffer clearing code split got it backwards. Fixing that is possible but
      not corrupting the state in the first place is more robust.
      
      Avoid corruption of the kernel XSAVE buffer by using copy_user_to_xstate()
      which validates the XSAVE header contents before copying the actual states
      to the kernel. copy_user_to_xstate() was previously only called for
      compacted-format kernel buffers, but it works for both compacted and
      non-compacted forms.
      
      Using it for the non-compacted form is slower because of multiple
      __copy_from_user() operations, but that cost is less important than robust
      code in an already slow path.
      
      [ Changelog polished by Dave Hansen ]
      
      Fixes: b860eb8d ("x86/fpu/xstate: Define new functions for clearing fpregs and xstates")
      Reported-by: syzbot+2067e764dbcd10721e2e@syzkaller.appspotmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Rik van Riel <riel@surriel.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20210608144345.611833074@linutronix.de
      484cea4f
    • KVM: x86: Unload MMU on guest TLB flush if TDP disabled to force MMU sync · b53e84ee
      By Lai Jiangshan
      When using shadow paging, unload the guest MMU when emulating a guest TLB
      flush to ensure all roots are synchronized.  From the guest's perspective,
      flushing the TLB ensures any and all modifications to its PTEs will be
      recognized by the CPU.
      
      Note, unloading the MMU is overkill, but is done to mirror KVM's existing
      handling of INVPCID(all) and ensure the bug is squashed.  Future cleanup
      can be done to more precisely synchronize roots when servicing a guest
      TLB flush.
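
      A structural sketch of the emulated guest TLB flush after the change
      (hedged):

        static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
        {
                ++vcpu->stat.tlb_flush;

                /* With shadow paging, a guest TLB flush must also resync
                 * the shadow roots; a full unload is blunt but correct. */
                if (!tdp_enabled)
                        kvm_mmu_unload(vcpu);

                static_call(kvm_x86_tlb_flush_guest)(vcpu);
        }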
      
      If TDP is enabled, synchronizing the MMU is unnecessary even if nested
      TDP is in play, as a "legacy" TLB flush from L1 does not invalidate L1's
      TDP mappings.  For EPT, an explicit INVEPT is required to invalidate
      guest-physical mappings; for NPT, guest mappings are always tagged with
      an ASID and thus can only be invalidated via the VMCB's ASID control.
      
      This bug has existed since the introduction of KVM_VCPU_FLUSH_TLB.
      It was only recently exposed after Linux guests stopped flushing the
      local CPU's TLB prior to flushing remote TLBs (see commit 4ce94eab,
      "x86/mm/tlb: Flush remote and local TLBs concurrently"), but is also
      visible in Windows 10 guests.
      Tested-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Fixes: f38a7b75 ("KVM: X86: support paravirtualized help for TLB shootdowns")
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
      [sean: massaged comment and changelog]
      Message-Id: <20210531172256.2908-1-jiangshanlai@gmail.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b53e84ee
    • KVM: x86: Ensure liveliness of nested VM-Enter fail tracepoint message · f31500b0
      By Sean Christopherson
      Use the __string() machinery provided by the tracing subsystem to make a
      copy of the string literals consumed by the "nested VM-Enter failed"
      tracepoint.  A complete copy is necessary to ensure that the tracepoint
      can't outlive the data/memory it consumes and dereference stale memory.
      
      Because the tracepoint itself is defined by kvm, if kvm-intel and/or
      kvm-amd are built as modules, the memory holding the string literals
      defined by the vendor modules will be freed when the module is unloaded,
      whereas the tracepoint and its data in the ring buffer will live until
      kvm is unloaded (or "indefinitely" if kvm is built-in).
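
      A generic example of the __string()/__assign_str() pattern, which
      copies the string into the ring-buffer entry instead of storing a
      pointer to it:

        TRACE_EVENT(example_failed,
                TP_PROTO(const char *msg, u32 err),
                TP_ARGS(msg, err),

                TP_STRUCT__entry(
                        __string(msg, msg)      /* reserves space in the entry */
                        __field(u32, err)
                ),

                TP_fast_assign(
                        __assign_str(msg, msg); /* deep-copies the literal */
                        __entry->err = err;
                ),

                TP_printk("%s err=%u", __get_str(msg), __entry->err)
        );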
      
      This bug has existed since the tracepoint was added, but was recently
      exposed by a new check in tracing to detect exactly this type of bug.
      
        fmt: '%s%s
        ' current_buffer: ' vmx_dirty_log_t-140127  [003] ....  kvm_nested_vmenter_failed: '
        WARNING: CPU: 3 PID: 140134 at kernel/trace/trace.c:3759 trace_check_vprintf+0x3be/0x3e0
        CPU: 3 PID: 140134 Comm: less Not tainted 5.13.0-rc1-ce2e73ce600a-req #184
        Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
        RIP: 0010:trace_check_vprintf+0x3be/0x3e0
        Code: <0f> 0b 44 8b 4c 24 1c e9 a9 fe ff ff c6 44 02 ff 00 49 8b 97 b0 20
        RSP: 0018:ffffa895cc37bcb0 EFLAGS: 00010282
        RAX: 0000000000000000 RBX: ffffa895cc37bd08 RCX: 0000000000000027
        RDX: 0000000000000027 RSI: 00000000ffffdfff RDI: ffff9766cfad74f8
        RBP: ffffffffc0a041d4 R08: ffff9766cfad74f0 R09: ffffa895cc37bad8
        R10: 0000000000000001 R11: 0000000000000001 R12: ffffffffc0a041d4
        R13: ffffffffc0f4dba8 R14: 0000000000000000 R15: ffff976409f2c000
        FS:  00007f92fa200740(0000) GS:ffff9766cfac0000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000559bd11b0000 CR3: 000000019fbaa002 CR4: 00000000001726e0
        Call Trace:
         trace_event_printf+0x5e/0x80
         trace_raw_output_kvm_nested_vmenter_failed+0x3a/0x60 [kvm]
         print_trace_line+0x1dd/0x4e0
         s_show+0x45/0x150
         seq_read_iter+0x2d5/0x4c0
         seq_read+0x106/0x150
         vfs_read+0x98/0x180
         ksys_read+0x5f/0xe0
         do_syscall_64+0x40/0xb0
         entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Fixes: 380e0055 ("KVM: nVMX: trace nested VM-Enter failures detected by H/W")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Message-Id: <20210607175748.674002-1-seanjc@google.com>
      f31500b0
    • KVM: x86: Ensure PV TLB flush tracepoint reflects KVM behavior · af3511ff
      By Lai Jiangshan
      In record_steal_time(), st->preempted is read twice, so
      trace_kvm_pv_tlb_flush() might report an inconsistent result if
      kvm_vcpu_flush_tlb_guest() sees a different st->preempted later.

      It is a trivial problem that causes little actual harm, and it can be
      avoided by resetting and reading st->preempted atomically via xchg().
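
      The atomic read-and-reset, sketched:

        /* Read and clear st->preempted in one operation so the tracepoint
         * and kvm_vcpu_flush_tlb_guest() act on the same value. */
        u8 st_preempted = xchg(&st->preempted, 0);

        trace_kvm_pv_tlb_flush(vcpu->vcpu_id,
                               st_preempted & KVM_VCPU_FLUSH_TLB);
        if (st_preempted & KVM_VCPU_FLUSH_TLB)
                kvm_vcpu_flush_tlb_guest(vcpu);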
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>

      Message-Id: <20210531174628.10265-1-jiangshanlai@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      af3511ff
    • KVM: X86: MMU: Use the correct inherited permissions to get shadow page · b1bd5cba
      By Lai Jiangshan
      When computing the access permissions of a shadow page, use the effective
      permissions of the walk up to that point, i.e. the logic AND of its parents'
      permissions.  Two guest PxE entries that point at the same table gfn need to
      be shadowed with different shadow pages if their parents' permissions are
      different.  KVM currently uses the effective permissions of the last
      non-leaf entry for all non-leaf entries.  Because all non-leaf SPTEs have
      full ("uwx") permissions, and the effective permissions are recorded only
      in role.access and merged into the leaves, this can lead to incorrect
      reuse of a shadow page and eventually to a missing guest protection page
      fault.
      
      For example, here is a shared pagetable:
      
         pgd[]   pud[]        pmd[]            virtual address pointers
                           /->pmd1(u--)->pte1(uw-)->page1 <- ptr1 (u--)
              /->pud1(uw-)--->pmd2(uw-)->pte2(uw-)->page2 <- ptr2 (uw-)
         pgd-|           (shared pmd[] as above)
              \->pud2(u--)--->pmd1(u--)->pte1(uw-)->page1 <- ptr3 (u--)
                           \->pmd2(uw-)->pte2(uw-)->page2 <- ptr4 (u--)
      
        pud1 and pud2 point to the same pmd table, so:
        - ptr1 and ptr3 point to the same page.
        - ptr2 and ptr4 point to the same page.
      
      (pud1 and pud2 here are pud entries, while pmd1 and pmd2 here are pmd entries)
      
      - First, the guest reads from ptr1 first and KVM prepares a shadow
        page table with role.access=u--, from ptr1's pud1 and ptr1's pmd1.
        "u--" comes from the effective permissions of pgd, pud1 and
        pmd1, which are stored in pt->access.  "u--" is used also to get
        the pagetable for pud1, instead of "uw-".
      
      - Then the guest writes to ptr2 and KVM reuses pud1, which is present.
        The hypervisor sets up a shadow page for ptr2 with pt->access = "uw-"
        even though the pud1 pmd (because of the incorrect argument to
        kvm_mmu_get_page in the previous step) has role.access = "u--".
      
      - Then the guest reads from ptr3.  The hypervisor reuses pud1's
        shadow pmd for pud2, because both use "u--" for their permissions.
        Thus, the shadow pmd already includes entries for both pmd1 and pmd2.
      
      - At last, the guest writes to ptr4.  This causes no vmexit or pagefault,
        because pud1's shadow page structures included an "uw-" page even though
        its role.access was "u--".
      
      Any kind of shared page table can hit a similar problem in a virtual
      machine without TDP enabled, whenever the permissions inherited from
      different ancestors differ.
      
      In order to fix the problem, we change pt->access to be an array, and
      any access in it will not include permissions ANDed from child ptes.
      
      The test code is: https://lore.kernel.org/kvm/20210603050537.19605-1-jiangshanlai@gmail.com/
      Remember to test it with TDP disabled.
      
      The problem had existed long before the commit 41074d07 ("KVM: MMU:
      Fix inherited permissions for emulated guest pte updates"), and it
      is hard to find which is the culprit.  So there is no fixes tag here.
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
      Message-Id: <20210603052455.21023-1-jiangshanlai@gmail.com>
      Cc: stable@vger.kernel.org
      Fixes: cea0f0e7 ("[PATCH] KVM: MMU: Shadow page table caching")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b1bd5cba
    • KVM: LAPIC: Write 0 to TMICT should also cancel vmx-preemption timer · e898da78
      By Wanpeng Li
      According to the SDM 10.5.4.1:
      
        A write of 0 to the initial-count register effectively stops the local
        APIC timer, in both one-shot and periodic mode.
      
      However, the LAPIC timer in oneshot/periodic mode, when emulated by the
      VMX preemption timer, does not stop on a write of 0 to TMICT because
      vmx->hv_deadline_tsc is still programmed, so the guest receives a
      spurious timer interrupt later. Fix this by also cancelling the VMX
      preemption timer when 0 is written to the initial-count register.
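
      A hedged sketch of the cancel path (the helper consolidates what both
      timer backends need):

        static void cancel_apic_timer(struct kvm_lapic *apic)
        {
                hrtimer_cancel(&apic->lapic_timer.timer);
                /* Also disarm the VMX preemption timer backend, which
                 * otherwise keeps vmx->hv_deadline_tsc armed. */
                if (apic->lapic_timer.hv_timer_in_use)
                        cancel_hv_timer(apic);
        }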
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Message-Id: <1623050385-100988-1-git-send-email-wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e898da78
    • KVM: SVM: Fix SEV SEND_START session length & SEND_UPDATE_DATA query length after commit 238eca82 · 4f13d471
      By Ashish Kalra
      
      Commit 238eca82 ("KVM: SVM: Allocate SEV command structures on local stack")
      moved the structures used to communicate with the PSP, which were
      previously kzalloc()ed, onto the local stack. This breaks SEV live
      migration when computing the SEND_START session length and the
      SEND_UPDATE_DATA query length, because the session_len, trans_len and
      hdr_len fields are no longer zeroed for those commands before the SEV
      firmware API call is issued, so the firmware returns an incorrect
      session length and update-data header or trans length.

      Also, the SEV firmware API returns the SEV_RET_INVALID_LEN firmware
      error for these length-query API calls, and the return value and the
      firmware error need to be passed to user space as-is, so remove the
      return check in the KVM code.
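
      The essential difference, sketched: stack allocations are not
      zero-initialized, so the length-query path must clear the structure
      explicitly where kzalloc() used to do it implicitly.

        struct sev_data_send_start data;

        /* Without this, the PSP sees stack garbage in session_len and
         * reports a bogus required length. */
        memset(&data, 0, sizeof(data));
        data.handle = sev->handle;
        ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, &data, &argp->error);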
      Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
      Message-Id: <20210607061532.27459-1-Ashish.Kalra@amd.com>
      Fixes: 238eca82 ("KVM: SVM: Allocate SEV command structures on local stack")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4f13d471
  10. 08 June 2021, 1 commit
    • x86/ioremap: Map EFI-reserved memory as encrypted for SEV · 8d651ee9
      By Tom Lendacky
      Some drivers require memory that is marked as EFI boot services
      data. In order for this memory to not be re-used by the kernel
      after ExitBootServices(), efi_mem_reserve() is used to preserve it
      by inserting a new EFI memory descriptor and marking it with the
      EFI_MEMORY_RUNTIME attribute.
      
      Under SEV, memory marked with the EFI_MEMORY_RUNTIME attribute needs to
      be mapped encrypted by Linux, otherwise the kernel might crash at boot
      like below:
      
        EFI Variables Facility v0.08 2004-May-17
        general protection fault, probably for non-canonical address 0x3597688770a868b2: 0000 [#1] SMP NOPTI
        CPU: 13 PID: 1 Comm: swapper/0 Not tainted 5.12.4-2-default #1 openSUSE Tumbleweed
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
        RIP: 0010:efi_mokvar_entry_next
        [...]
        Call Trace:
         efi_mokvar_sysfs_init
         ? efi_mokvar_table_init
         do_one_initcall
         ? __kmalloc
         kernel_init_freeable
         ? rest_init
         kernel_init
         ret_from_fork
      
      Expand the __ioremap_check_other() function to additionally check for
      this other type of boot data reserved at runtime and indicate that it
      should be mapped encrypted for an SEV guest.
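
      A hedged sketch of the expanded check (simplified from the ioremap
      code):

        /* EFI boot services data carrying EFI_MEMORY_RUNTIME was reserved
         * at runtime via efi_mem_reserve() and holds data written by the
         * (encrypted) kernel, so it must be mapped encrypted under SEV. */
        if (sev_active() &&
            efi_mem_type(addr) == EFI_BOOT_SERVICES_DATA &&
            (efi_mem_attributes(addr) & EFI_MEMORY_RUNTIME))
                desc->flags |= IORES_MAP_ENCRYPTED;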
      
       [ bp: Massage commit message. ]
      
      Fixes: 58c90902 ("efi: Support for MOK variable config table")
      Reported-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Joerg Roedel <jroedel@suse.de>
      Cc: <stable@vger.kernel.org> # 5.10+
      Link: https://lkml.kernel.org/r/20210608095439.12668-2-joro@8bytes.org
      8d651ee9
  11. 05 June 2021, 1 commit
    • x86/sev: Check SME/SEV support in CPUID first · 009767db
      By Pu Wen
      The first two bits of the CPUID leaf 0x8000001F EAX indicate whether SEV
      or SME is supported, respectively. It's better to check whether SEV or
      SME is actually supported before accessing the MSR_AMD64_SEV to check
      whether SEV or SME is enabled.
      
      This is both a bare-metal issue and a guest/VM issue. Since the first
      generation Hygon Dhyana CPU doesn't support the MSR_AMD64_SEV, reading that
      MSR results in a #GP - either directly from hardware in the bare-metal
      case or via the hypervisor (because the RDMSR is actually intercepted)
      in the guest/VM case, resulting in a failed boot. And since this is very
      early in the boot phase, rdmsrl_safe()/native_read_msr_safe() can't be
      used.
      
      So check the CPUID bits first, before accessing the MSR.
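
      The ordering, sketched (hedged; the early boot code uses its own
      CPUID/RDMSR primitives):

        /* CPUID 0x8000001F EAX: bit 0 = SME, bit 1 = SEV. */
        eax = cpuid_eax(0x8000001f);
        if (!(eax & (BIT(0) | BIT(1))))
                return;  /* neither SME nor SEV: never touch the MSR */

        /* Only now is MSR_AMD64_SEV known to exist. */
        sev_status = __rdmsr(MSR_AMD64_SEV);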
      
       [ tlendacky: Expand and improve commit message. ]
       [ bp: Massage commit message. ]
      
      Fixes: eab696d8 ("x86/sev: Do not require Hypervisor CPUID bit for SEV guests")
      Signed-off-by: Pu Wen <puwen@hygon.cn>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: <stable@vger.kernel.org> # v5.10+
      Link: https://lkml.kernel.org/r/20210602070207.2480-1-puwen@hygon.cn
      009767db
  12. 04 June 2021, 2 commits
    • x86/fault: Don't send SIGSEGV twice on SEGV_PKUERR · 5405b42c
      By Jiashuo Liang
      __bad_area_nosemaphore() calls both force_sig_pkuerr() and
      force_sig_fault() when handling SEGV_PKUERR. This does not cause
      problems because the second signal is filtered by the legacy_queue()
      check in __send_signal(): in both cases the signal is SIGSEGV, and the
      second one sees that the first is already pending.

      Still, this makes the kernel do unnecessary work, so send the signal
      only once for SEGV_PKUERR, as sketched below.
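
      A sketch, simplified from __bad_area_nosemaphore():

        if (si_code == SEGV_PKUERR) {
                force_sig_pkuerr((void __user *)address, pkey);
                return;  /* previously fell through to force_sig_fault() */
        }

        force_sig_fault(SIGSEGV, si_code, (void __user *)address);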
      
       [ bp: Massage commit message. ]
      
      Fixes: 9db812db ("signal/x86: Call force_sig_pkuerr from __bad_area_nosemaphore")
      Suggested-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: NJiashuo Liang <liangjs@pku.edu.cn>
      Signed-off-by: NBorislav Petkov <bp@suse.de>
      Acked-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      Link: https://lkml.kernel.org/r/20210601085203.40214-1-liangjs@pku.edu.cn
      5405b42c
    • x86/setup: Always reserve the first 1M of RAM · f1d4d47c
      By Mike Rapoport
      There are BIOSes that are known to corrupt the memory under 1M, or more
      precisely under 640K because the memory above 640K is anyway reserved
      for the EGA/VGA frame buffer and BIOS.
      
      To prevent usage of the memory that will be potentially clobbered by the
      kernel, the beginning of the memory is always reserved. The exact size
      of the reserved area is determined by CONFIG_X86_RESERVE_LOW build time
      and the "reservelow=" command line option. The reserved range may be
      from 4K to 640K with the default of 64K. There are also configurations
      that reserve the entire 1M range, like machines with SandyBridge graphic
      devices or systems that enable crash kernel.
      
      In addition to the potentially clobbered memory, EBDA of unknown size may
      be as low as 128K and the memory above that EBDA start is also reserved
      early.
      
      It would have been possible to reserve the entire range under 1M, were it
      not for the real mode trampoline that must reside in that area.
      
      To accommodate placement of the real mode trampoline and keep the memory
      safe from being clobbered by BIOS, reserve the first 64K of RAM before
      memory allocations are possible and then, after the real mode trampoline
      is allocated, reserve the entire range from 0 to 1M.
      
      Update trim_snb_memory() and reserve_real_mode() to avoid redundant
      reservations of the same memory range.
      
      Also make sure the memory under 1M is not getting freed by
      efi_free_boot_services().
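
      The two-stage reservation, sketched (hedged; the real code is spread
      across early setup and the real mode init path):

        /* Stage 1, before any memblock allocations are possible: keep the
         * first 64K away from the allocator. */
        memblock_reserve(0, SZ_64K);

        /* Stage 2, once the real mode trampoline has been allocated below
         * 1M: reserve the entire first megabyte. */
        memblock_reserve(0, SZ_1M);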
      
       [ bp: Massage commit message and comments. ]
      
      Fixes: a799c2bd ("x86/setup: Consolidate early memory reservations")
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Tested-by: Hugh Dickins <hughd@google.com>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=213177
      Link: https://lkml.kernel.org/r/20210601075354.5149-2-rppt@kernel.org
      f1d4d47c
  13. 03 June 2021, 2 commits
    • x86/alternative: Optimize single-byte NOPs at an arbitrary position · 2b31e8ed
      By Borislav Petkov
      Up until now the assumption was that an alternative patching site would
      have some instructions at the beginning and trailing single-byte NOPs
      (0x90) padding. Therefore, the patching machinery would go and optimize
      those single-byte NOPs into longer ones.
      
      However, this assumption is broken on 32-bit when code like
      hv_do_hypercall() in hyperv_init() would use the retpoline speculation
      killer CALL_NOSPEC. The 32-bit version of that macro would align certain
      insns to 16 bytes, leading to the compiler issuing one or more
      single-byte NOPs, depending on the holes it needs to fill for alignment.
      
      That would lead to the warning in optimize_nops() to fire:
      
        ------------[ cut here ]------------
        Not a NOP at 0xc27fb598
         WARNING: CPU: 0 PID: 0 at arch/x86/kernel/alternative.c:211 optimize_nops.isra.13
      
      due to that function verifying whether all of the following bytes really
      are single-byte NOPs.
      
      Therefore, carve out the NOP padding into a separate function and call
      it for each NOP range beginning with a single-byte NOP.
      
      Fixes: 23c1ad53 ("x86/alternatives: Optimize optimize_nops()")
      Reported-by: Richard Narron <richard@aaazen.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=213301
      Link: https://lkml.kernel.org/r/20210601212125.17145-1-bp@alien8.de
      2b31e8ed
    • x86/cpufeatures: Force disable X86_FEATURE_ENQCMD and remove update_pasid() · 9bfecd05
      By Thomas Gleixner
      While digesting the XSAVE-related horrors which got introduced with
      the supervisor/user split, the recent addition of ENQCMD-related
      functionality got on the radar and turned out to be similarly broken.
      
      update_pasid(), which is only required when X86_FEATURE_ENQCMD is
      available, is invoked from two places:
      
       1) From switch_to() for the incoming task
      
       2) Via an SMP function call from the IOMMU/SVM code
      
      #1 is half-ways correct as it hacks around the brokenness of get_xsave_addr()
         by enforcing the state to be 'present', but all the conditionals in that
         code are completely pointless for that.
      
         Also the invocation is just useless overhead because at that point
         it's guaranteed that TIF_NEED_FPU_LOAD is set on the incoming task
         and all of this can be handled at return to user space.
      
      #2 is broken beyond repair. The comment in the code claims that it is safe
         to invoke this in an IPI, but that's just wishful thinking.
      
         FPU state of a running task is protected by fpregs_lock() which is
         nothing else than a local_bh_disable(). As BH-disabled regions run
         usually with interrupts enabled the IPI can hit a code section which
         modifies FPU state and there is absolutely no guarantee that any of the
         assumptions which are made for the IPI case is true.
      
         Also the IPI is sent to all CPUs in mm_cpumask(mm), but the IPI is
         invoked with a NULL pointer argument, so it can hit a completely
         unrelated task and unconditionally force an update for nothing.
         Worse, it can hit a kernel thread which operates on a user space
         address space and set a random PASID for it.
      
      The offending commit does not cleanly revert, but it's sufficient to
      force disable X86_FEATURE_ENQCMD and to remove the broken update_pasid()
      code to make this dysfunctional all over the place. Anything more
      complex would require more surgery and none of the related functions
      outside of the x86 core code are blatantly wrong, so removing those
      would be overkill.
      
      As nothing enables the PASID bit in the IA32_XSS MSR yet, which is
      required to make this actually work, this cannot result in a regression
      except for related out of tree train-wrecks, but they are broken already
      today.
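
      The force-disable itself follows the usual build-time mask pattern
      from arch/x86/include/asm/disabled-features.h (sketch; surrounding
      mask bits elided):

        /* Mask the feature unconditionally so that
         * cpu_feature_enabled(X86_FEATURE_ENQCMD) is compile-time false. */
        #define DISABLE_ENQCMD  (1 << (X86_FEATURE_ENQCMD & 31))

        #define DISABLED_MASK16 (DISABLE_ENQCMD /* | other disabled bits */)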
      
      Fixes: 20f0afd1 ("x86/mmu: Allocate/free a PASID")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/87mtsd6gr9.ffs@nanos.tec.linutronix.de
      9bfecd05
  14. 01 June 2021, 1 commit