1. 22 Oct 2020, 6 commits
  2. 28 Sep 2020, 17 commits
  3. 12 Sep 2020, 1 commit
    • kvm x86/mmu: use KVM_REQ_MMU_SYNC to sync when needed · f6f6195b
      Committed by Lai Jiangshan
      When kvm_mmu_get_page() gets a page with unsynced children, the spt
      page table is unsynchronized with the guest page table, but the
      guest might not issue a "flush" operation on it, e.g. when a page
      table entry is changed from zero (non-present) to present. The
      hypervisor has the responsibility to synchronize the page tables.
      
      KVM behaved as above for many years, but commit 8c8560b8
      ("KVM: x86/mmu: Use KVM_REQ_TLB_FLUSH_CURRENT for MMU specific flushes")
      inadvertently included a line of code that changed it, without
      giving any reason in the changelog. The commit's intention was
      clearly to change KVM_REQ_TLB_FLUSH -> KVM_REQ_TLB_FLUSH_CURRENT so
      that other contexts are not needlessly flushed; however, one of the
      hunks changed a nearby KVM_REQ_MMU_SYNC instead. This patch changes
      it back.
      
      Link: https://lore.kernel.org/lkml/20200320212833.3507-26-sean.j.christopherson@intel.com/
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
      Message-Id: <20200902135421.31158-1-jiangshanlai@gmail.com>
      Fixes: 8c8560b8 ("KVM: x86/mmu: Use KVM_REQ_TLB_FLUSH_CURRENT for MMU specific flushes")
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
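
      A minimal sketch of the hunk this patch restores in
      kvm_mmu_get_page(), paraphrased from arch/x86/kvm/mmu/mmu.c
      (surrounding context may differ across kernel versions):

          if (sp->unsync_children)
                  /*
                   * Queue a sync of the shadow page tables before
                   * reentering the guest; commit 8c8560b8 had mistakenly
                   * turned this into KVM_REQ_TLB_FLUSH_CURRENT, which
                   * flushes the current TLB context but never reconciles
                   * the unsynced children.
                   */
                  kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);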
  4. 24 Aug 2020, 1 commit
  5. 22 Aug 2020, 1 commit
    • KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() · fdfe7cbd
      Committed by Will Deacon
      The 'flags' field of 'struct mmu_notifier_range' is used to indicate
      whether invalidate_range_{start,end}() are permitted to block. In the
      case of kvm_mmu_notifier_invalidate_range_start(), this field is not
      forwarded on to the architecture-specific implementation of
      kvm_unmap_hva_range() and therefore the backend cannot sensibly decide
      whether or not to block.
      
      Add an extra 'flags' parameter to kvm_unmap_hva_range() so that
      architectures are aware as to whether or not they are permitted to block.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Message-Id: <20200811102725.7121-2-will@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
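
      A sketch of the interface change, assuming the 5.9-era prototypes
      (each architecture's implementation gains the same parameter):

          /* Before: the backend could not tell whether blocking was allowed. */
          int kvm_unmap_hva_range(struct kvm *kvm,
                                  unsigned long start, unsigned long end);

          /* After: the MMU notifier range flags are forwarded through. */
          int kvm_unmap_hva_range(struct kvm *kvm,
                                  unsigned long start, unsigned long end,
                                  unsigned flags);

          /* Call site in kvm_mmu_notifier_invalidate_range_start(): */
          kvm_unmap_hva_range(kvm, range->start, range->end, range->flags);

      A backend can then test flags & MMU_NOTIFIER_RANGE_BLOCKABLE to
      decide whether sleeping is permitted.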
  6. 31 Jul 2020, 5 commits
  7. 17 Jul 2020, 1 commit
  8. 11 Jul 2020, 6 commits
    • KVM: x86: mmu: Add guest physical address check in translate_gpa() · ec7771ab
      Committed by Mohammed Gamal
      Intel processors of various generations have supported 36, 39, 46
      or 52 bits for physical addresses. Until Ice Lake introduced
      MAXPHYADDR==52, running on a machine with a higher MAXPHYADDR than
      the guest more or less worked, because software that relied on
      reserved address bits (like KVM) generally used bit 51 as a marker
      and therefore the page faults were generated anyway.
      
      Unfortunately this is not true anymore if the host MAXPHYADDR is 52,
      and this can cause problems when migrating from a MAXPHYADDR<52
      machine to one with MAXPHYADDR==52.  Typically, the latter are machines
      that support 5-level page tables, so they can be identified easily from
      the LA57 CPUID bit.
      
      When that happens, the guest might have a physical address with reserved
      bits set, but the host won't see that and trap it.  Hence, we need
      to check page faults' physical addresses against the guest's maximum
      physical memory and if it's exceeded, we need to add the PFERR_RSVD_MASK
      bits to the page fault error code.
      
      This patch does this for the MMU's page walks. The next patches
      will ensure that the correct exception and error code are produced
      whenever no host-reserved bits are set in page table entries.
      Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200710154811.418214-4-mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
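
      A sketch of the added check, with the illegal-GPA test written out
      inline (the actual patch may factor it into a helper):

          static gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
                                     struct x86_exception *exception)
          {
                  /* GPA beyond the guest's MAXPHYADDR: report a reserved-bit fault. */
                  if (gpa >> cpuid_maxphyaddr(vcpu)) {
                          exception->error_code |= PFERR_RSVD_MASK;
                          return UNMAPPED_GVA;
                  }
                  return gpa;
          }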
    • KVM: x86: mmu: Move translate_gpa() to mmu.c · cd313569
      Committed by Mohammed Gamal
      There is also no point in it being inline, since it is always
      called through function pointers (see the sketch below), so remove
      the inline qualifier.
      Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200710154811.418214-3-mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
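
      For illustration, the kind of call site that makes the inline
      qualifier moot (paraphrased):

          /*
           * translate_gpa() is only ever reached through the per-MMU
           * function pointer, so it can never be inlined at the call
           * site anyway.
           */
          gpa = vcpu->arch.mmu->translate_gpa(vcpu, gpa, access, exception);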
    • KVM: x86: drop superfluous mmu_check_root() from fast_pgd_switch() · fe9304d3
      Committed by Vitaly Kuznetsov
      The mmu_check_root() check in fast_pgd_switch() seems to be
      superfluous: when the GPA is outside of the visible range,
      cached_root_available() will fail for non-direct roots (as we
      can't have a matching one on the list), and we don't seem to
      care for direct ones.
      
      Also, raising #TF immediately when a non-existent GFN is written
      to CR3 doesn't seem to match architectural behavior. Drop the check.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710141157.1640173-10-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
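
      A sketch of fast_pgd_switch() after the removal (paraphrased; the
      fast path is limited to 64-bit roots):

          static bool fast_pgd_switch(struct kvm_vcpu *vcpu, gpa_t new_pgd,
                                      union kvm_mmu_page_role new_role)
          {
                  struct kvm_mmu *mmu = vcpu->arch.mmu;

                  if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
                      mmu->root_level >= PT64_ROOT_4LEVEL)
                          /*
                           * The mmu_check_root(vcpu, new_pgd >> PAGE_SHIFT)
                           * call that used to sit here was redundant: for an
                           * out-of-range GPA, no matching non-direct root can
                           * be on the cached list, so cached_root_available()
                           * fails on its own.
                           */
                          return cached_root_available(vcpu, new_pgd, new_role);

                  return false;
          }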
    • KVM: nSVM: implement nested_svm_load_cr3() and use it for host->guest switch · a506fdd2
      Committed by Vitaly Kuznetsov
      An undesired triple fault gets injected into the L1 guest on SVM
      when L2 is launched with certain CR3 values. The #TF is raised by
      the mmu_check_root() check in fast_pgd_switch(), and the root
      cause is that when kvm_set_cr3() is called from
      nested_prepare_vmcb_save() with NPT enabled, CR3 points to an
      nGPA, so we can't check it with kvm_is_visible_gfn().
      
      Using the generic kvm_set_cr3() when switching to a nested guest
      is not a great idea, as we'd have to distinguish between 'real'
      CR3s and 'nested' CR3s, e.g. to avoid calling kvm_mmu_new_pgd()
      with an nGPA. Following nVMX, implement a nested-specific
      nested_svm_load_cr3() that does the job.
      
      To support the change, nested_svm_load_cr3() needs to be re-ordered
      with nested_svm_init_mmu_context().
      
      Note: the current implementation is sub-optimal as we always do a
      TLB flush/MMU sync, but this is still an improvement, as we at
      least stop doing kvm_mmu_reset_context() (the new helper is
      sketched after this entry).
      
      Fixes: 7c390d35 ("kvm: x86: Add fast CR3 switch code path")
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710141157.1640173-8-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
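
      A sketch of the new helper, assuming the 5.9-era kvm_mmu_new_pgd()
      signature (PDPTR handling and error checking elided):

          static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
                                         bool nested_npt)
          {
                  /*
                   * With NPT, cr3 holds an nGPA: it must be neither checked
                   * with kvm_is_visible_gfn() nor fed to kvm_mmu_new_pgd().
                   * The false/false arguments keep the unconditional TLB
                   * flush/MMU sync mentioned in the note above.
                   */
                  if (!nested_npt)
                          kvm_mmu_new_pgd(vcpu, cr3, false, false);

                  vcpu->arch.cr3 = cr3;
                  kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);

                  kvm_init_mmu(vcpu, false);
                  return 0;
          }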
    • KVM: MMU: stop dereferencing vcpu->arch.mmu to get the context for MMU init · 8c008659
      Committed by Paolo Bonzini
      kvm_init_shadow_mmu() was actually the only function that could be
      called with different vcpu->arch.mmu values. Now that
      kvm_init_shadow_npt_mmu() is separated from kvm_init_shadow_mmu(),
      we always know the MMU context we need to use, and there is no need
      to dereference the vcpu->arch.mmu pointer.
      
      Based on a patch by Vitaly Kuznetsov <vkuznets@redhat.com>.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710141157.1640173-3-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
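
      A sketch of the resulting pattern, assuming the contexts in
      question are vcpu->arch.root_mmu and vcpu->arch.guest_mmu
      (bodies elided):

          /*
           * Each entry point now names its context once and passes it
           * down, instead of helpers re-reading vcpu->arch.mmu internally.
           */
          void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer)
          {
                  struct kvm_mmu *context = &vcpu->arch.root_mmu;
                  /* ... compute role, then shadow_mmu_init_context(vcpu, context, ...) */
          }

          void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4,
                                       u32 efer, gpa_t nested_cr3)
          {
                  struct kvm_mmu *context = &vcpu->arch.guest_mmu;
                  /* ... same helper, different context ... */
          }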
    • KVM: nSVM: split kvm_init_shadow_npt_mmu() from kvm_init_shadow_mmu() · 0f04a2ac
      Committed by Vitaly Kuznetsov
      As a preparatory change for moving kvm_mmu_new_pgd() from
      nested_prepare_vmcb_save() to nested_svm_init_mmu_context(), split
      kvm_init_shadow_npt_mmu() from kvm_init_shadow_mmu(). This also makes
      the code look more like nVMX (kvm_init_shadow_ept_mmu()).
      
      No functional change intended.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710141157.1640173-2-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
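
      A sketch of the split (prototypes paraphrased from the patch; the
      shared helper's exact name is an assumption):

          /* Common initialization, factored out of kvm_init_shadow_mmu(). */
          static void shadow_mmu_init_context(struct kvm_vcpu *vcpu, u32 cr0,
                                              u32 cr4, u32 efer,
                                              union kvm_mmu_page_role new_role);

          /* Existing entry point, now a thin wrapper around the helper. */
          void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer);

          /* New nested-NPT entry point, mirroring nVMX's kvm_init_shadow_ept_mmu(). */
          void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4,
                                       u32 efer, gpa_t nested_cr3);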
  9. 10 Jul 2020, 2 commits