1. 23 Oct 2020, 2 commits
  2. 22 Oct 2020, 10 commits
  3. 28 Sep 2020, 17 commits
  4. 12 Sep 2020, 1 commit
    • kvm x86/mmu: use KVM_REQ_MMU_SYNC to sync when needed · f6f6195b
      Lai Jiangshan authored
      When kvm_mmu_get_page() gets a page with unsynced children, the spt
      pagetable is unsynchronized with the guest pagetable. But the
      guest might not issue a "flush" operation on it when the pagetable
      entry is changed from zero or other cases. The hypervisor has the
      responsibility to synchronize the pagetables.
      
      KVM behaved as above for many years, but commit 8c8560b8
      ("KVM: x86/mmu: Use KVM_REQ_TLB_FLUSH_CURRENT for MMU specific flushes")
      inadvertently included a line of code to change it without giving any
      reason in the changelog. It is clear that the commit's intention was to
      change KVM_REQ_TLB_FLUSH -> KVM_REQ_TLB_FLUSH_CURRENT, so we don't
      needlessly flush other contexts; however, one of the hunks changed
      a nearby KVM_REQ_MMU_SYNC instead.  This patch changes it back.
      
      Link: https://lore.kernel.org/lkml/20200320212833.3507-26-sean.j.christopherson@intel.com/
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
      Message-Id: <20200902135421.31158-1-jiangshanlai@gmail.com>
      Fixes: 8c8560b8 ("KVM: x86/mmu: Use KVM_REQ_TLB_FLUSH_CURRENT for MMU specific flushes")
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
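      The distinction the commit restores can be shown with a minimal, self-contained sketch. The request names, the vcpu struct, and the processing loop below are simplified stand-ins for the kernel's kvm_make_request()/request-handling machinery, not the actual implementation: the point is that a TLB-flush-only request leaves unsynced shadow pages stale, while an MMU-sync request resynchronizes them.

      ```c
      #include <assert.h>

      /* Simplified stand-ins for the kernel's KVM_REQ_* request bits. */
      enum {
          REQ_TLB_FLUSH_CURRENT = 1u << 0, /* flush this vCPU's TLB only   */
          REQ_MMU_SYNC          = 1u << 1, /* resync unsynced shadow pages */
      };

      struct vcpu {
          unsigned requests;
          int shadow_synced;  /* 1 when shadow page tables match the guest */
          int tlb_flushed;
      };

      /* Emulates kvm_make_request(): just sets a bit on the vCPU. */
      static void make_request(struct vcpu *v, unsigned req)
      {
          v->requests |= req;
      }

      /* Emulates the request-processing loop run before guest entry. */
      static void process_requests(struct vcpu *v)
      {
          if (v->requests & REQ_MMU_SYNC) {
              v->shadow_synced = 1;   /* like kvm_mmu_sync_roots() */
              v->tlb_flushed = 1;     /* the sync also flushes      */
          }
          if (v->requests & REQ_TLB_FLUSH_CURRENT)
              v->tlb_flushed = 1;     /* flush only, no resync      */
          v->requests = 0;
      }

      /* Returns whether the shadow pages end up synced for a given request. */
      static int demo(unsigned req)
      {
          struct vcpu v = { 0, 0, 0 };

          make_request(&v, req);
          process_requests(&v);
          return v.shadow_synced;
      }
      ```

      With this model, requesting only REQ_TLB_FLUSH_CURRENT (the accidental hunk) leaves shadow_synced at 0, which is the bug the patch reverts.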
  5. 24 Aug 2020, 1 commit
  6. 22 Aug 2020, 1 commit
    • KVM: Pass MMU notifier range flags to kvm_unmap_hva_range() · fdfe7cbd
      Will Deacon authored
      The 'flags' field of 'struct mmu_notifier_range' is used to indicate
      whether invalidate_range_{start,end}() are permitted to block. In the
      case of kvm_mmu_notifier_invalidate_range_start(), this field is not
      forwarded on to the architecture-specific implementation of
      kvm_unmap_hva_range() and therefore the backend cannot sensibly decide
      whether or not to block.
      
      Add an extra 'flags' parameter to kvm_unmap_hva_range() so that
      architectures are aware as to whether or not they are permitted to block.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Message-Id: <20200811102725.7121-2-will@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
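      A minimal sketch of the before/after shape of the interface. The function names and the RANGE_BLOCKABLE bit below are simplified stand-ins (the kernel's flag is MMU_NOTIFIER_RANGE_BLOCKABLE on struct mmu_notifier_range); the sketch only shows why forwarding the flags lets the backend decide whether it may block.

      ```c
      #include <assert.h>

      /* Stand-in for the kernel's MMU_NOTIFIER_RANGE_BLOCKABLE flag. */
      #define RANGE_BLOCKABLE (1u << 0)

      /* Before the patch: no flags reach the backend, so it can never
       * safely decide to block and must take the conservative path. */
      static int unmap_hva_range_old(unsigned long start, unsigned long end)
      {
          (void)start; (void)end;
          return 0; /* cannot decide -> never block */
      }

      /* After the patch: the notifier flags are forwarded, so the
       * architecture backend may sleep (e.g. on contended locks) when
       * the caller permits blocking. Returns 1 if blocking is allowed. */
      static int unmap_hva_range_new(unsigned long start, unsigned long end,
                                     unsigned flags)
      {
          (void)start; (void)end;
          return (flags & RANGE_BLOCKABLE) ? 1 : 0;
      }
      ```

      The only interface change is the extra parameter; each architecture's implementation then consults it instead of guessing.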
  7. 31 Jul 2020, 5 commits
  8. 17 Jul 2020, 1 commit
  9. 11 Jul 2020, 2 commits
    • KVM: x86: mmu: Add guest physical address check in translate_gpa() · ec7771ab
      Mohammed Gamal authored
      Intel processors of various generations have supported 36, 39, 46 or 52
      bits for physical addresses.  Until IceLake introduced MAXPHYADDR==52,
      running on a machine with higher MAXPHYADDR than the guest more or less
      worked, because software that relied on reserved address bits (like KVM)
      generally used bit 51 as a marker and therefore the page faults were
      generated anyway.
      
      Unfortunately this is not true anymore if the host MAXPHYADDR is 52,
      and this can cause problems when migrating from a MAXPHYADDR<52
      machine to one with MAXPHYADDR==52.  Typically, the latter are machines
      that support 5-level page tables, so they can be identified easily from
      the LA57 CPUID bit.
      
      When that happens, the guest might have a physical address with reserved
      bits set, but the host won't see that and trap it.  Hence, we need
      to check page faults' physical addresses against the guest's maximum
      physical memory and if it's exceeded, we need to add the PFERR_RSVD_MASK
      bits to the page fault error code.
      
      This patch does this for the MMU's page walks.  The next patches will
      ensure that the correct exception and error code is produced whenever
      no host-reserved bits are set in page table entries.
      
      Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200710154811.418214-4-mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
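      The check described above can be sketched as a small standalone program. The helper names (rsvd_bits_mask, check_gpa, faults) are illustrative, not the kernel's; only PFERR_RSVD_MASK's meaning (the x86 #PF reserved-bit error code flag) is taken from the architecture. The idea: any GPA bit at or above the guest's MAXPHYADDR is reserved from the guest's point of view, so the walk must fail with PFERR_RSVD_MASK even when the 52-bit host would not trap it.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define PFERR_RSVD_MASK (1ull << 3) /* x86 #PF "reserved bit" flag */

      /* Bits at or above the guest's MAXPHYADDR are reserved for it. */
      static uint64_t rsvd_bits_mask(unsigned maxphyaddr)
      {
          return ~0ull << maxphyaddr;
      }

      /* Sketch of the check added to the MMU's GPA translation: fail the
       * walk with a reserved-bit error if the address exceeds the guest's
       * physical address width, since the wider host won't fault on it. */
      static uint64_t check_gpa(uint64_t gpa, unsigned guest_maxphyaddr,
                                uint64_t *exception_error)
      {
          if (gpa & rsvd_bits_mask(guest_maxphyaddr)) {
              *exception_error |= PFERR_RSVD_MASK;
              return ~0ull; /* UNMAPPED_GVA-style failure */
          }
          return gpa;
      }

      /* Demo wrapper: does translating this GPA raise a reserved-bit fault? */
      static int faults(uint64_t gpa, unsigned guest_maxphyaddr)
      {
          uint64_t err = 0;

          check_gpa(gpa, guest_maxphyaddr, &err);
          return (err & PFERR_RSVD_MASK) != 0;
      }
      ```

      For a guest with MAXPHYADDR of 46, a GPA with bit 46 set faults under this check, while any address below 2^46 translates normally; that restores the behavior a narrower host would have enforced in hardware.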
    • KVM: x86: mmu: Move translate_gpa() to mmu.c · cd313569
      Mohammed Gamal authored
      There is also no point in it being inline, since it is always called
      through function pointers, so remove the inline keyword as well.
      Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200710154811.418214-3-mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>