1. 13 Nov 2019, 1 commit
  2. 16 Sep 2019, 1 commit
    • kvm: mmu: Fix overflow on kvm mmu page limit calculation · 163b24b1
      Authored by Ben Gardon
      [ Upstream commit bc8a3d8925a8fa09fa550e0da115d95851ce33c6 ]
      
      KVM bases its memory usage limits on the total number of guest pages
      across all memslots. However, those limits, and the calculations to
      produce them, use 32 bit unsigned integers. This can result in overflow
      if a VM has more guest pages than can be represented by a u32. As a
      result of this overflow, KVM can use a low limit on the number of MMU
      pages it will allocate. This makes KVM unable to map all of guest memory
      at once, prompting spurious faults.
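
      A minimal standalone illustration of the failure mode, assuming the
      KVM_PERMILLE_MMU_PAGES ratio of 20 MMU pages per 1000 guest pages used
      by x86 KVM; this is a sketch of why 32-bit arithmetic caps the limit
      far too low for very large guests, not the upstream diff:

        #include <stdint.h>
        #include <stdio.h>

        #define KVM_PERMILLE_MMU_PAGES 20   /* MMU pages per 1000 guest pages */

        static uint32_t mmu_page_limit_u32(uint64_t guest_pages)
        {
                /* truncation and the 32-bit multiply both overflow */
                uint32_t pages = (uint32_t)guest_pages;
                return pages * KVM_PERMILLE_MMU_PAGES / 1000;
        }

        static uint64_t mmu_page_limit_u64(uint64_t guest_pages)
        {
                return guest_pages * KVM_PERMILLE_MMU_PAGES / 1000;
        }

        int main(void)
        {
                uint64_t pages = 5ULL << 30;   /* ~20 TiB of 4 KiB guest pages */
                printf("u32 limit: %u\n", mmu_page_limit_u32(pages));
                printf("u64 limit: %llu\n",
                       (unsigned long long)mmu_page_limit_u64(pages));
                return 0;
        }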
      
      Tested: Ran all kvm-unit-tests on an Intel Haswell machine. This patch
      	introduced no new failures.
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      163b24b1
  3. 07 Aug 2019, 1 commit
    • x86: kvm: avoid constant-conversion warning · 80f58147
      Authored by Arnd Bergmann
      [ Upstream commit a6a6d3b1f867d34ba5bd61aa7bb056b48ca67cff ]
      
      clang finds a construct suspicious that converts an unsigned
      character to a signed integer and back, causing an overflow:
      
      arch/x86/kvm/mmu.c:4605:39: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -205 to 51 [-Werror,-Wconstant-conversion]
                      u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0;
                         ~~                               ^~
      arch/x86/kvm/mmu.c:4607:38: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -241 to 15 [-Werror,-Wconstant-conversion]
                      u8 uf = (pfec & PFERR_USER_MASK) ? ~u : 0;
                         ~~                              ^~
      arch/x86/kvm/mmu.c:4609:39: error: implicit conversion from 'int' to 'u8' (aka 'unsigned char') changes value from -171 to 85 [-Werror,-Wconstant-conversion]
                      u8 ff = (pfec & PFERR_FETCH_MASK) ? ~x : 0;
                         ~~                               ^~
      
      Add an explicit cast to tell clang that everything works as
      intended here.
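
      A standalone sketch of the kind of cast involved, with an assumed mask
      value standing in for the kernel's PFERR_WRITE_MASK; the point is only
      that the explicit (u8) cast makes the intended truncation visible:

        #include <stdint.h>
        typedef uint8_t u8;

        #define PFERR_WRITE_MASK 0x2u   /* assumed value for illustration */

        static u8 compute_wf(unsigned int pfec, u8 w)
        {
                /* before: ~w is promoted to int and may be negative; in the
                 * kernel it folds to a constant, so clang reports the
                 * implicit int -> u8 narrowing as -Wconstant-conversion */
                /* u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0; */

                /* after: the cast states that only the low 8 bits are wanted */
                u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;
                return wf;
        }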
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Link: https://github.com/ClangBuiltLinux/linux/issues/95
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      80f58147
  4. 03 Jul 2019, 1 commit
  5. 24 Mar 2019, 2 commits
    • KVM: x86/mmu: Detect MMIO generation wrap in any address space · 656e9e5d
      Authored by Sean Christopherson
      commit e1359e2beb8b0a1188abc997273acbaedc8ee791 upstream.
      
      The check to detect a wrap of the MMIO generation explicitly looks for a
      generation number of zero.  Now that unique memslots generation numbers
      are assigned to each address space, only address space 0 will get a
      generation number of exactly zero when wrapping.  E.g. when address
      space 1 goes from 0x7fffe to 0x80002, the MMIO generation number will
      wrap to 0x2.  Adjust the MMIO generation to strip the address space
      modifier prior to checking for a wrap.
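
      A minimal sketch of the idea; the bit layout here is assumed for
      illustration (a low modifier bit per address space) and is not the
      upstream encoding:

        #include <stdbool.h>
        #include <stdint.h>

        /* hypothetical: generation bits that encode the address space id */
        #define MMIO_GEN_ADDR_SPACE_BITS 0x2ULL

        static bool mmio_gen_wrapped(uint64_t mmio_gen)
        {
                /* strip the address-space modifier first, so a wrap in
                 * address space 1 (0x2 in the example above) is detected
                 * just like the wrap to 0 in address space 0 */
                return (mmio_gen & ~MMIO_GEN_ADDR_SPACE_BITS) == 0;
        }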
      
      Fixes: 4bd518f1 ("KVM: use separate generations for each address space")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      656e9e5d
    • KVM: Call kvm_arch_memslots_updated() before updating memslots · 23ad135a
      Authored by Sean Christopherson
      commit 152482580a1b0accb60676063a1ac57b2d12daf6 upstream.
      
      kvm_arch_memslots_updated() is at this point in time an x86-specific
      hook for handling MMIO generation wraparound.  x86 stashes 19 bits of
      the memslots generation number in its MMIO sptes in order to avoid
      full page fault walks for repeat faults on emulated MMIO addresses.
      Because only 19 bits are used, wrapping the MMIO generation number is
      possible, if unlikely.  kvm_arch_memslots_updated() alerts x86 that
      the generation has changed so that it can invalidate all MMIO sptes in
      case the effective MMIO generation has wrapped so as to avoid using a
      stale spte, e.g. a (very) old spte that was created with generation==0.
      
      Given that the purpose of kvm_arch_memslots_updated() is to prevent
      consuming stale entries, it needs to be called before the new generation
      is propagated to memslots.  Invalidating the MMIO sptes after updating
      memslots means that there is a window where a vCPU could dereference
      the new memslots generation, e.g. 0, and incorrectly reuse an old MMIO
      spte that was created with (pre-wrap) generation==0.
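
      A simplified standalone sketch of the required ordering; the types and
      the hook body are stand-ins, not the kernel's definitions:

        #include <stdint.h>

        struct kvm_memslots { uint64_t generation; };
        struct kvm { struct kvm_memslots *memslots; };

        /* stand-in for the x86 hook that zaps MMIO sptes on a wrap */
        static void kvm_arch_memslots_updated(struct kvm *kvm, uint64_t gen)
        {
                (void)kvm; (void)gen;   /* invalidate stale MMIO sptes here */
        }

        static void publish_generation(struct kvm *kvm, uint64_t new_gen)
        {
                /* 1. let arch code react to the (possibly wrapped) value */
                kvm_arch_memslots_updated(kvm, new_gen);
                /* 2. only then make it visible, so a vCPU can never pair an
                 *    old MMIO spte with the freshly wrapped generation */
                kvm->memslots->generation = new_gen;
        }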
      
      Fixes: e59dbe09 ("KVM: Introduce kvm_arch_memslots_updated()")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      23ad135a
  6. 06 Dec 2018, 1 commit
    • kvm: mmu: Fix race in emulated page table writes · 471aca57
      Authored by Junaid Shahid
      commit 0e0fee5c539b61fdd098332e0e2cc375d9073706 upstream.
      
      When a guest page table is updated via an emulated write,
      kvm_mmu_pte_write() is called to update the shadow PTE using the just
      written guest PTE value. But if two emulated guest PTE writes happened
      concurrently, it is possible that the guest PTE and the shadow PTE end
      up being out of sync. Emulated writes do not mark the shadow page as
      unsync-ed, so this inconsistency will not be resolved even by a guest TLB
      flush (unless the page was marked as unsync-ed at some other point).
      
      This is fixed by re-reading the current value of the guest PTE after the
      MMU lock has been acquired instead of just using the value that was
      written prior to calling kvm_mmu_pte_write().
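
      A standalone sketch of the approach with stub helpers (not the
      kvm_mmu_pte_write() code itself): the guest PTE is fetched again once
      the lock is held, so a racing emulated write cannot leave the shadow
      PTE out of sync with it:

        #include <stdint.h>

        struct mmu { int locked; };                        /* stand-in lock */
        static void mmu_lock(struct mmu *m)   { m->locked = 1; }
        static void mmu_unlock(struct mmu *m) { m->locked = 0; }
        static uint64_t read_guest_pte(uint64_t gpa) { (void)gpa; return 0; }
        static void update_shadow_pte(uint64_t gentry) { (void)gentry; }

        static void pte_write_sketch(struct mmu *m, uint64_t gpa)
        {
                mmu_lock(m);
                /* re-read the current guest PTE under the lock instead of
                 * trusting the value captured before the lock was taken */
                uint64_t gentry = read_guest_pte(gpa);
                update_shadow_pte(gentry);
                mmu_unlock(m);
        }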
      Signed-off-by: Junaid Shahid <junaids@google.com>
      Reviewed-by: Wanpeng Li <wanpengli@tencent.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      471aca57
  7. 01 Oct 2018, 1 commit
    • KVM: x86: fix L1TF's MMIO GFN calculation · daa07cbc
      Authored by Sean Christopherson
      One defense against L1TF in KVM is to always set the upper five bits
      of the *legal* physical address in the SPTEs for non-present and
      reserved SPTEs, e.g. MMIO SPTEs.  In the MMIO case, the GFN of the
      MMIO SPTE may overlap with the upper five bits that are being usurped
      to defend against L1TF.  To preserve the GFN, the bits of the GFN that
      overlap with the repurposed bits are shifted left into the reserved
      bits, i.e. the GFN in the SPTE will be split into high and low parts.
      When retrieving the GFN from the MMIO SPTE, e.g. to check for an MMIO
      access, get_mmio_spte_gfn() unshifts the affected bits and restores
      the original GFN for comparison.  Unfortunately, get_mmio_spte_gfn()
      neglects to mask off the reserved bits in the SPTE that were used to
      store the upper chunk of the GFN.  As a result, KVM fails to detect
      MMIO accesses whose GPA overlaps the repurposed bits, which in turn
      causes guest panics and hangs.
      
      Fix the bug by generating a mask that covers the lower chunk of the
      GFN, i.e. the bits that aren't shifted by the L1TF mitigation.  The
      alternative approach would be to explicitly zero the five reserved
      bits that are used to store the upper chunk of the GFN, but that
      requires additional run-time computation and makes an already-ugly
      bit of code even more inscrutable.
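
      A standalone sketch of the masking idea, with assumed constants (46
      physical address bits, 5 repurposed high bits); this is not the
      upstream get_mmio_spte_gfn(), and the shift-back of the relocated
      upper chunk is omitted:

        #include <stdint.h>

        #define PAGE_SHIFT 12
        #define GENMASK_ULL(h, l) \
                (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

        #define RSVD_BITS      5                    /* assumed L1TF bits   */
        #define LOW_PHYS_BITS  (46 - RSVD_BITS)     /* GFN bits left alone */

        static uint64_t get_mmio_spte_gfn_sketch(uint64_t spte)
        {
                /* cover only the lower, non-relocated chunk of the GFN so
                 * the reserved high bits never leak into the result */
                const uint64_t lower_gfn_mask =
                        GENMASK_ULL(LOW_PHYS_BITS - 1, PAGE_SHIFT);
                uint64_t gpa = spte & lower_gfn_mask;
                /* (the upper chunk would be unshifted and OR'd in here) */
                return gpa >> PAGE_SHIFT;
        }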
      
      I considered adding a WARN_ON_ONCE(low_phys_bits-1 <= PAGE_SHIFT) to
      warn if GENMASK_ULL() generated a nonsensical value, but that seemed
      silly since that would mean a system that supports VMX has less than
      18 bits of physical address space...
      Reported-by: Sakari Ailus <sakari.ailus@iki.fi>
      Fixes: d9b47449c1a1 ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
      Cc: Junaid Shahid <junaids@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Junaid Shahid <junaids@google.com>
      Tested-by: Sakari Ailus <sakari.ailus@linux.intel.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      daa07cbc
  8. 20 Sep 2018, 2 commits
  9. 07 Sep 2018, 1 commit
  10. 30 Aug 2018, 4 commits
    • KVM: x86: Do not re-{try,execute} after failed emulation in L2 · 6c3dfeb6
      Authored by Sean Christopherson
      Commit a6f177ef ("KVM: Reenter guest after emulation failure if
      due to access to non-mmio address") added reexecute_instruction() to
      handle the scenario where two (or more) vCPUS race to write a shadowed
      page, i.e. reexecute_instruction() is intended to return true if and
      only if the instruction being emulated was accessing a shadowed page.
      As L0 is only explicitly shadowing L1 tables, an emulation failure of
      a nested VM instruction cannot be due to a race to write a shadowed
      page and so should never be re-executed.
      
      This fixes an issue where an "MMIO" emulation failure[1] in L2 is all
      but guaranteed to result in an infinite loop when TDP is enabled.
      Because "cr2" is actually an L2 GPA when TDP is enabled, calling
      kvm_mmu_gva_to_gpa_write() to translate cr2 in the non-direct mapped
      case (L2 is never direct mapped) will almost always yield UNMAPPED_GVA
      and cause reexecute_instruction() to immediately return true.  The
      !mmio_info_in_cache() check in kvm_mmu_page_fault() doesn't catch this
      case because mmio_info_in_cache() returns false for a nested MMU (the
      MMIO caching currently handles L1 only, e.g. to cache nested guests'
      GPAs we'd have to manually flush the cache when switching between
      VMs and when L1 updated its page tables controlling the nested guest).
      
      Way back when, commit 68be0803 ("KVM: x86: never re-execute
      instruction with enabled tdp") changed reexecute_instruction() to
      always return false when using TDP under the assumption that KVM would
      only get into the emulator for MMIO.  Commit 95b3cf69 ("KVM: x86:
      let reexecute_instruction work for tdp") effectively reverted that
      behavior in order to handle the scenario where emulation failed due to
      an access from L1 to the shadow page tables for L2, but it didn't
      account for the case where emulation failed in L2 with TDP enabled.
      
      All of the above logic also applies to retry_instruction(), added by
      commit 1cb3f3ae ("KVM: x86: retry non-page-table writing
      instructions").  An indefinite loop in retry_instruction() should be
      impossible as it protects against retrying the same instruction over
      and over, but it's still correct to not retry an L2 instruction in
      the first place.
      
      Fix the immediate issue by adding a check for a nested guest when
      determining whether or not to allow retry in kvm_mmu_page_fault().
      In addition to fixing the immediate bug, add WARN_ON_ONCE in the
      retry functions since they are not designed to handle nested cases,
      i.e. they need to be modified even if there is some scenario in the
      future where we want to allow retrying a nested guest.
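
      A simplified standalone sketch of the added check, with stub helpers
      standing in for the KVM ones and an assumed flag value; not the exact
      upstream hunk in kvm_mmu_page_fault():

        #include <stdbool.h>

        #define EMULTYPE_ALLOW_RETRY 0x8        /* assumed flag value */

        /* stand-ins: in KVM these inspect the vCPU / MMIO cache state */
        static bool mmio_info_in_cache(void) { return false; }
        static bool is_guest_mode(void)      { return true;  }  /* running L2 */

        static int choose_emulation_type(void)
        {
                int emulation_type = 0;
                /* never allow retry/re-execute for faults taken while the
                 * vCPU is in guest (L2) mode; L0 only shadows L1 tables */
                if (!mmio_info_in_cache() && !is_guest_mode())
                        emulation_type = EMULTYPE_ALLOW_RETRY;
                return emulation_type;
        }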
      
      [1] This issue was encountered after commit 3a2936de ("kvm: mmu:
          Don't expose private memslots to L2") changed the page fault path
          to return KVM_PFN_NOSLOT when translating an L2 access to a
          private memslot.  Returning KVM_PFN_NOSLOT is semantically correct
          when we want to hide a memslot from L2, i.e. there effectively is
          no defined memory region for L2, but it has the unfortunate side
          effect of making KVM think the GFN is a MMIO page, thus triggering
          emulation.  The failure occurred with in-development code that
          deliberately exposed a private memslot to L2, which L2 accessed
          with an instruction that is not emulated by KVM.
      
      Fixes: 95b3cf69 ("KVM: x86: let reexecute_instruction work for tdp")
      Fixes: 1cb3f3ae ("KVM: x86: retry non-page-table writing instructions")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      Cc: Xiao Guangrong <xiaoguangrong@tencent.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      6c3dfeb6
    • KVM: x86: Default to not allowing emulation retry in kvm_mmu_page_fault · 472faffa
      Authored by Sean Christopherson
      Effectively force kvm_mmu_page_fault() to opt-in to allowing retry to
      make it more obvious when and why it allows emulation to be retried.
      Previously this approach was less convenient due to retry and
      re-execute behavior being controlled by separate flags that were also
      inverted in their implementations (opt-in versus opt-out).
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      472faffa
    • KVM: x86: Merge EMULTYPE_RETRY and EMULTYPE_ALLOW_REEXECUTE · 384bf221
      Authored by Sean Christopherson
      retry_instruction() and reexecute_instruction() are a package deal,
      i.e. there is no scenario where one is allowed and the other is not.
      Merge their controlling emulation type flags to enforce this in code.
      Name the combined flag EMULTYPE_ALLOW_RETRY to make it abundantly
      clear that we are allowing re{try,execute} to occur, as opposed to
      explicitly requesting retry of a previously failed instruction.
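
      Illustrative flag layout only; the bit positions are assumed, not taken
      from the upstream header:

        /* before: two flags that were always enabled or disabled together */
        #define EMULTYPE_RETRY           (1 << 3)
        #define EMULTYPE_ALLOW_REEXECUTE (1 << 4)

        /* after: a single flag that opts in to both behaviours */
        #define EMULTYPE_ALLOW_RETRY     (1 << 3)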
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      384bf221
    • KVM: x86: Invert emulation re-execute behavior to make it opt-in · 8065dbd1
      Authored by Sean Christopherson
      Re-execution of an instruction after emulation decode failure is
      intended to be used only when emulating shadow page accesses.  Invert
      the flag to make allowing re-execution opt-in since that behavior is
      by far in the minority.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      8065dbd1
  11. 15 Aug 2018, 1 commit
  12. 06 Aug 2018, 19 commits
  13. 27 Jul 2018, 1 commit
  14. 05 Jul 2018, 1 commit
    • x86/KVM/VMX: Add L1D flush logic · c595ceee
      Authored by Paolo Bonzini
      Add the logic for flushing L1D on VMENTER. The flush depends on the static
      key being enabled and the new l1tf_flush_l1d flag being set.
      
      The flag is set (a simplified sketch follows below):
       - Always, if the flush module parameter is 'always'
      
       - Conditionally at:
         - Entry to vcpu_run(), i.e. after executing user space
      
         - From the sched_in notifier, i.e. when switching to a vCPU thread.
      
         - From vmexit handlers which are considered unsafe, i.e. where
           sensitive data can be brought into L1D:
      
           - The emulator, which could be a good target for other speculative
             execution-based threats,
      
           - The MMU, which can bring host page tables in the L1 cache.
           
           - External interrupts
      
            - Nested operations that require the MMU (see above). That is
              vmptrld, vmptrst, vmclear, vmwrite and vmread.
      
            - When handling invept and invvpid
      
      [ tglx: Split out from combo patch and reduced to a single flag ]
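
      A simplified standalone sketch of the decision logic described above;
      the enum, struct and function names are stand-ins, not the upstream
      implementation:

        #include <stdbool.h>

        enum l1d_flush_mode { L1D_FLUSH_NEVER, L1D_FLUSH_COND, L1D_FLUSH_ALWAYS };

        struct vcpu_state {
                bool l1tf_flush_l1d;   /* set by emulator/MMU/IRQ/nested paths */
        };

        static bool should_flush_l1d(enum l1d_flush_mode mode, struct vcpu_state *v)
        {
                if (mode == L1D_FLUSH_NEVER)
                        return false;
                if (mode == L1D_FLUSH_ALWAYS)
                        return true;
                /* conditional mode: flush only if an "unsafe" path set the
                 * flag since the last VMENTER, then rearm for the next one */
                bool flush = v->l1tf_flush_l1d;
                v->l1tf_flush_l1d = false;
                return flush;
        }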
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c595ceee
  15. 25 May 2018, 1 commit
  16. 15 May 2018, 2 commits