1. 21 Mar 2022, 1 commit
  2. 08 Mar 2022, 6 commits
    • KVM: x86/mmu: Zap invalidated roots via asynchronous worker · 22b94c4b
      Committed by Paolo Bonzini
      Use the system worker threads to zap the roots invalidated
      by the TDP MMU's "fast zap" mechanism, implemented by
      kvm_tdp_mmu_invalidate_all_roots().
      
      At this point, apart from allowing some parallelism in the zapping of
      roots, the workqueue is a glorified linked list: work items are added and
      flushed entirely within a single kvm->slots_lock critical section.  However,
      the workqueue fixes a latent issue where kvm_mmu_zap_all_invalidated_roots()
      assumes that it owns a reference to all invalid roots; therefore, no
      one can set the invalid bit outside kvm_mmu_zap_all_fast().  Putting the
      invalidated roots on a linked list... erm, on a workqueue ensures that
      tdp_mmu_zap_root_work() only puts back those extra references that
      kvm_mmu_zap_all_invalidated_roots() had gifted to it.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
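      As a rough illustration of the reference gifting described above, here is a
      hedged userspace sketch (invented names, and a single-threaded list standing
      in for the workqueue; this is a model of the idea, not KVM code):

      #include <stdio.h>
      #include <stdlib.h>

      /* Hypothetical stand-in for a TDP MMU root: a refcount plus flags. */
      struct root {
              int refcount;
              int invalid;
              struct root *next_work;   /* models the work list */
      };

      static struct root *work_head;

      /* Models kvm_tdp_mmu_invalidate_all_roots() for one root: mark it
       * invalid and "gift" an extra reference to the zapping worker by
       * queueing the root. */
      static void invalidate_root(struct root *r)
      {
              r->invalid = 1;
              r->refcount++;            /* the gifted reference */
              r->next_work = work_head;
              work_head = r;
      }

      /* Models tdp_mmu_zap_root_work(): tear the root down, then put back
       * exactly the reference that invalidate_root() gifted. */
      static void zap_root_work(struct root *r)
      {
              /* ... zapping of SPTEs would happen here ... */
              if (--r->refcount == 0)
                      free(r);
      }

      static void flush_work_list(void)
      {
              while (work_head) {
                      struct root *r = work_head;

                      work_head = r->next_work;
                      zap_root_work(r);
              }
      }

      int main(void)
      {
              struct root *r = calloc(1, sizeof(*r));

              r->refcount = 1;     /* the vCPU's own reference */
              invalidate_root(r);  /* refcount is now 2 */
              r->refcount--;       /* vCPU drops its reference; the gifted
                                    * one keeps the root alive */
              flush_work_list();   /* worker puts the gifted reference back
                                    * and the root is finally freed */
              puts("root zapped and freed");
              return 0;
      }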
    • KVM: x86/mmu: Defer TLB flush to caller when freeing TDP MMU shadow pages · bb95dfb9
      Committed by Sean Christopherson
      Defer TLB flushes to the caller when freeing TDP MMU shadow pages instead
      of immediately flushing.  Because the shadow pages are freed in an RCU
      callback, so long as at least one CPU holds RCU, all CPUs are protected.
      For vCPUs running in the guest, i.e. consuming TLB entries, KVM only
      needs to ensure the caller services the pending TLB flush before dropping
      its RCU protections.  I.e. use the caller's RCU as a proxy for all vCPUs
      running in the guest.
      
      Deferring the flushes allows batching flushes, e.g. when installing a
      1gb hugepage and zapping a pile of SPs.  And when zapping an entire root,
      deferring flushes allows skipping the flush entirely (because flushes are
      not needed in that case).
      
      Avoiding flushes when zapping an entire root is especially important as
      synchronizing with other CPUs via IPI after zapping every shadow page can
      cause significant performance issues for large VMs.  The issue is
      exacerbated by KVM zapping entire top-level entries without dropping
      RCU protection, which can lead to RCU stalls even when zapping roots
      backing relatively "small" amounts of guest memory, e.g. 2tb.  Removing
      the IPI bottleneck largely mitigates the RCU issues, though it's likely
      still a problem for 5-level paging.  A future patch will further address
      the problem by zapping roots in multiple passes to avoid holding RCU for
      an extended duration.
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220226001546.360188-20-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
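      The defer-to-caller pattern above generalizes. A hedged sketch in plain C
      (all names invented, not KVM's real signatures): the zap helper only
      accumulates a "flush needed" flag, and the caller services one batched
      flush at the end:

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical: zapping one shadow page reports that a TLB flush will
       * eventually be needed, but never flushes by itself. */
      static bool zap_one_page(int page)
      {
              (void)page;   /* the actual zapping would happen here */
              return true;
      }

      static void tlb_flush(void)
      {
              puts("one batched TLB flush");
      }

      /* The caller batches: one flush for the whole range instead of one
       * flush (and one round of IPIs) per shadow page. */
      static void zap_range(int start, int end)
      {
              bool flush = false;
              int p;

              for (p = start; p < end; p++)
                      flush |= zap_one_page(p);

              if (flush)
                      tlb_flush();   /* serviced before "RCU" is dropped */
      }

      int main(void)
      {
              zap_range(0, 512);   /* e.g. zapping a pile of SPs at once */
              return 0;
      }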
    • KVM: x86/mmu: Zap only TDP MMU leafs in kvm_zap_gfn_range() · cf3e2642
      Committed by Sean Christopherson
      Zap only leaf SPTEs in the TDP MMU's zap_gfn_range(), and rename various
      functions accordingly.  When removing mappings for functional correctness
      (except for the stupid VFIO GPU passthrough memslots bug), zapping the
      leaf SPTEs is sufficient as the paging structures themselves do not point
      at guest memory and do not directly impact the final translation (in the
      TDP MMU).
      
      Note, this aligns the TDP MMU with the legacy/full MMU, which zaps only
      the rmaps, a.k.a. leaf SPTEs, in kvm_zap_gfn_range() and
      kvm_unmap_gfn_range().
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220226001546.360188-18-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
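      A hedged model of the leaf-only zap (a toy tree, nothing like KVM's real
      SPTE layout): interior nodes are left in place because they do not point
      at guest memory, so clearing the leaves is sufficient:

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Hypothetical page-table node. */
      struct pt_entry {
              bool is_leaf;
              bool present;
              struct pt_entry *children;   /* valid when !is_leaf */
              size_t nr_children;
      };

      /* Clear only leaf entries; the paging structures themselves survive. */
      static void zap_leafs(struct pt_entry *e)
      {
              size_t i;

              if (e->is_leaf) {
                      e->present = false;
                      return;
              }
              for (i = 0; i < e->nr_children; i++)
                      zap_leafs(&e->children[i]);
      }

      int main(void)
      {
              struct pt_entry leaves[2] = {
                      { .is_leaf = true, .present = true },
                      { .is_leaf = true, .present = true },
              };
              struct pt_entry root = {
                      .is_leaf = false, .present = true,
                      .children = leaves, .nr_children = 2,
              };

              zap_leafs(&root);
              printf("root present: %d, leaf 0 present: %d\n",
                     root.present, leaves[0].present);
              return 0;
      }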
    • KVM: x86/mmu: Document that zapping invalidated roots doesn't need to flush · 7ae5840e
      Committed by Sean Christopherson
      Remove the misleading flush "handling" when zapping invalidated TDP MMU
      roots, and document that flushing is unnecessary for all flavors of MMUs
      when zapping invalid/obsolete roots/pages.  The "handling" in the TDP MMU
      is dead code, as zap_gfn_range() is called with shared=true, in which
      case it will never return true due to the flushing being handled by
      tdp_mmu_zap_spte_atomic().
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220226001546.360188-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Formalize TDP MMU's (unintended?) deferred TLB flush logic · db01416b
      Committed by Sean Christopherson
      Explicitly ignore the result of zap_gfn_range() when putting the last
      reference to a TDP MMU root, and add a pile of comments to formalize the
      TDP MMU's behavior of deferring TLB flushes to alloc/reuse.  Note, this
      only affects the !shared case, as zap_gfn_range() subtly never returns
      true for "flush" as the flush is handled by tdp_mmu_zap_spte_atomic().
      
      Putting the root without a flush is ok because even if there are stale
      references to the root in the TLB, they are unreachable because KVM will
      not run the guest with the same ASID without first flushing (where ASID
      in this context refers to both SVM's explicit ASID and Intel's implicit
      ASID that is constructed from VPID+PCID+EPT4A+etc...).
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220226001546.360188-5-seanjc@google.com>
      Reviewed-by: Mingwei Zhang <mizhang@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Fix wrong/misleading comments in TDP MMU fast zap · f28e9c7f
      Committed by Sean Christopherson
      Fix misleading and arguably wrong comments in the TDP MMU's fast zap
      flow.  The comments, and the fact that actually zapping invalid roots was
      added separately, strongly suggests that zapping invalid roots is an
      optimization and not required for correctness.  That is a lie.
      
      KVM _must_ zap invalid roots before returning from kvm_mmu_zap_all_fast(),
      because when it's called from kvm_mmu_invalidate_zap_pages_in_memslot(),
      KVM is relying on it to fully remove all references to the memslot.  Once
      the memslot is gone, KVM's mmu_notifier hooks will be unable to find the
      stale references as the hva=>gfn translation is done via the memslots.
      If KVM doesn't immediately zap SPTEs and userspace unmaps a range after
      deleting a memslot, KVM will fail to zap in response to the mmu_notifier
      due to not finding a memslot corresponding to the notifier's range, which
      leads to a variation of use-after-free.
      
      The other misleading comment (and code) explicitly states that roots
      without a reference should be skipped.  While that's technically true,
      it's also extremely misleading as it should be impossible for KVM to
      encounter a defunct root on the list while holding mmu_lock for write.
      Opportunistically add a WARN to enforce that invariant.
      
      Fixes: b7cccd39 ("KVM: x86/mmu: Fast invalidation for TDP MMU")
      Fixes: 4c6654bd ("KVM: x86/mmu: Tear down roots before kvm_mmu_zap_all_fast returns")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220226001546.360188-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 02 Mar 2022, 1 commit
  4. 01 Mar 2022, 3 commits
    • KVM: WARN if is_unsync_root() is called on a root without a shadow page · 5d6a3221
      Committed by Sean Christopherson
      WARN and bail if is_unsync_root() is passed a root for which there is no
      shadow page, i.e. is passed the physical address of one of the special
      roots, which do not have an associated shadow page.  The current usage
      squeaks by without bug reports because neither kvm_mmu_sync_roots() nor
      kvm_mmu_sync_prev_roots() calls the helper with pae_root or pml4_root,
      and 5-level AMD CPUs are not generally available, i.e. no one can coerce
      KVM into calling is_unsync_root() on pml5_root.
      
      Note, this doesn't fix the mess with 5-level nNPT, it just (hopefully)
      prevents KVM from crashing.
      
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220225182248.3812651-8-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Zap only obsolete roots if a root shadow page is zapped · 527d5cd7
      Committed by Sean Christopherson
      Zap only obsolete roots when responding to zapping a single root shadow
      page.  Because KVM keeps root_count elevated when stuffing a previous
      root into its PGD cache, shadowing a 64-bit guest means that zapping any
      root causes all vCPUs to reload all roots, even if their current root is
      not affected by the zap.
      
      For many kernels, zapping a single root is a frequent operation, e.g. in
      Linux it happens whenever an mm is dropped, e.g. process exits, etc...
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220225182248.3812651-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
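      A hedged sketch of the behavioural change (hypothetical generation
      numbers standing in for KVM's obsolete-root tracking): only vCPUs whose
      current root was actually zapped reload, the rest keep running:

      #include <stdio.h>

      struct vcpu {
              int id;
              int root_gen;   /* generation of this vCPU's current root */
      };

      /* Old behaviour: any root zap forced every vCPU to reload all roots.
       * New behaviour modeled here: reload only if this vCPU's root belongs
       * to the zapped generation, i.e. is obsolete. */
      static void maybe_reload(struct vcpu *v, int zapped_gen)
      {
              if (v->root_gen == zapped_gen)
                      printf("vcpu %d: root obsolete, reloading\n", v->id);
              else
                      printf("vcpu %d: root unaffected, no reload\n", v->id);
      }

      int main(void)
      {
              struct vcpu vcpus[2] = { { 0, 1 }, { 1, 2 } };
              int i;

              for (i = 0; i < 2; i++)
                      maybe_reload(&vcpus[i], 1);   /* generation 1 zapped */
              return 0;
      }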
    • KVM: Drop kvm_reload_remote_mmus(), open code request in x86 users · 2f6f66cc
      Committed by Sean Christopherson
      Remove the generic kvm_reload_remote_mmus() and open code its
      functionality into the two x86 callers.  x86 is (obviously) the only
      architecture that uses the hook, and is also the only architecture that
      uses KVM_REQ_MMU_RELOAD in a way that's consistent with the name.  That
      will change in a future patch: when zapping a single shadow page, x86
      doesn't actually _need_ to reload all vCPUs' MMUs; only MMUs whose root
      is being zapped actually need to be reloaded.
      
      s390 also uses KVM_REQ_MMU_RELOAD, but for a slightly different purpose.
      
      Drop the generic code in anticipation of implementing s390 and x86 arch
      specific requests, which will allow dropping KVM_REQ_MMU_RELOAD entirely.
      
      Opportunistically reword the x86 TDP MMU comment to avoid making
      references to functions (and requests!) when possible, and to remove the
      rather ambiguous "this".
      
      No functional change intended.
      
      Cc: Ben Gardon <bgardon@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220225182248.3812651-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 25 Feb 2022, 9 commits
    • KVM: x86/mmu: clear MMIO cache when unloading the MMU · 6d58f275
      Committed by Paolo Bonzini
      For cleanliness, do not leave a stale GVA in the cache after all the roots are
      cleared.  In practice, kvm_mmu_load will go through kvm_mmu_sync_roots if
      paging is on, and will not use vcpu_match_mmio_gva at all if paging is off.
      However, leaving data in the cache might cause bugs in the future.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: Always use current mmu's role when loading new PGD · d2e5f333
      Committed by Paolo Bonzini
      Since the guest PGD is now loaded after the MMU has been set up
      completely, the desired role for a cache hit is simply the current
      mmu_role.  There is no need to compute it again, so __kvm_mmu_new_pgd
      can be folded in kvm_mmu_new_pgd.
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: load new PGD after the shadow MMU is initialized · 3cffc89d
      Committed by Paolo Bonzini
      Now that __kvm_mmu_new_pgd does not look at the MMU's root_level and
      shadow_root_level anymore, pull the PGD load after the initialization of
      the shadow MMUs.
      
      Besides being more intuitive, this enables future simplifications
      and optimizations because it's not necessary anymore to compute the
      role outside kvm_init_mmu.  In particular, kvm_mmu_reset_context was not
      attempting to use a cached PGD to avoid having to figure out the new role.
      With this change, it could follow what nested_{vmx,svm}_load_cr3 are doing,
      and avoid unloading all the cached roots.
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: look for a cached PGD when going from 32-bit to 64-bit · 5499ea73
      Committed by Paolo Bonzini
      Right now, PGD caching avoids placing a PAE root in the cache by using the
      old value of mmu->root_level and mmu->shadow_root_level; it does not look
      for a cached PGD if the old root is a PAE one, and then frees it using
      kvm_mmu_free_roots.
      
      Change the logic instead to free the uncacheable root early.
      This way, __kvm_mmu_new_pgd is able to look up the cache when going from
      32-bit to 64-bit (if there is a hit, the invalid root becomes the least
      recently used).  An example of this is nested virtualization with shadow
      paging, when a 64-bit L1 runs a 32-bit L2.
      
      As a side effect (which is actually the reason why this patch was
      written), PGD caching does not use the old value of mmu->root_level
      and mmu->shadow_root_level anymore.
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
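      A rough model of the lookup (a fixed-size array and invented names;
      KVM's real prev_roots handling differs in detail). Because the
      uncacheable PAE root has already been freed, the lookup can hit when
      switching from 32-bit to 64-bit, and the outgoing root simply becomes
      the least recently used entry:

      #include <stdbool.h>
      #include <stdio.h>

      #define NR_PREV_ROOTS 3

      struct root_info {
              unsigned long pgd;
              bool valid;
      };

      static struct root_info prev_roots[NR_PREV_ROOTS];

      /* On a hit, swap the cached entry with the current root, so the old
       * (possibly already invalidated) root slides into the cache as LRU. */
      static bool cached_root_available(struct root_info *cur,
                                        unsigned long new_pgd)
      {
              int i;

              for (i = 0; i < NR_PREV_ROOTS; i++) {
                      if (prev_roots[i].valid && prev_roots[i].pgd == new_pgd) {
                              struct root_info hit = prev_roots[i];

                              prev_roots[i] = *cur;
                              *cur = hit;
                              return true;
                      }
              }
              return false;
      }

      int main(void)
      {
              /* The PAE root was freed early, so "cur" is already invalid. */
              struct root_info cur = { .pgd = 0, .valid = false };

              prev_roots[0] = (struct root_info){ .pgd = 0x5000, .valid = true };
              printf("hit: %d, current pgd now: %#lx\n",
                     cached_root_available(&cur, 0x5000), cur.pgd);
              return 0;
      }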
    • KVM: x86/mmu: do not pass vcpu to root freeing functions · 0c1c92f1
      Committed by Paolo Bonzini
      These functions only operate on a given MMU, of which there is more
      than one in a vCPU (we care about two, because the third does not have
      any roots and is only used to walk guest page tables).  They do need a
      struct kvm in order to lock the mmu_lock, but they do not need anything
      else in the struct kvm_vcpu.  So, pass the vcpu->kvm directly to them.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86/mmu: do not consult levels when freeing roots · 594bef79
      Committed by Paolo Bonzini
      Right now, PGD caching requires a complicated dance of first computing
      the MMU role and passing it to __kvm_mmu_new_pgd(), and then separately calling
      kvm_init_mmu().
      
      Part of this is due to kvm_mmu_free_roots using mmu->root_level and
      mmu->shadow_root_level to distinguish whether the page table uses a single
      root or 4 PAE roots.  Because kvm_init_mmu() can overwrite mmu->root_level,
      kvm_mmu_free_roots() must be called before kvm_init_mmu().
      
      However, even after kvm_init_mmu() there is a way to detect whether the
      page table may hold PAE roots, as root.hpa isn't backed by a shadow when
      it points at PAE roots.  Using this method results in simpler code, and
      is one less obstacle in moving all calls to __kvm_mmu_new_pgd() after the
      MMU has been initialized.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: use struct kvm_mmu_root_info for mmu->root · b9e5603c
      Committed by Paolo Bonzini
      The root_hpa and root_pgd fields form essentially a struct kvm_mmu_root_info.
      Use the struct to have more consistency between mmu->root and
      mmu->prev_roots.
      
      The patch is entirely search and replace except for cached_root_available,
      which does not need a temporary struct kvm_mmu_root_info anymore.
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
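      A minimal sketch of the consolidation (field names approximate the
      kernel's, but this is not the real layout): one struct replaces the
      loose root_hpa/root_pgd pair and matches the element type of
      prev_roots:

      #include <stdio.h>

      #define KVM_MMU_NUM_PREV_ROOTS 3

      struct kvm_mmu_root_info {
              unsigned long pgd;
              unsigned long hpa;
      };

      struct mmu {
              struct kvm_mmu_root_info root;   /* was: root_hpa + root_pgd */
              struct kvm_mmu_root_info prev_roots[KVM_MMU_NUM_PREV_ROOTS];
      };

      int main(void)
      {
              struct mmu mmu = { .root = { .pgd = 0x1000, .hpa = 0x2000 } };

              /* The active root and the cached roots now share one shape. */
              printf("root: pgd=%#lx hpa=%#lx\n", mmu.root.pgd, mmu.root.hpa);
              return 0;
      }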
    • KVM: x86/mmu: avoid NULL-pointer dereference on page freeing bugs · 9191b8f0
      Committed by Paolo Bonzini
      WARN and bail if KVM attempts to free a root that isn't backed by a shadow
      page.  KVM allocates a bare page for "special" roots, e.g. when using PAE
      paging or shadowing 2/3/4-level page tables with 4/5-level, and so root_hpa
      will be valid but won't be backed by a shadow page.  It's all too easy to
      blindly call mmu_free_root_page() on root_hpa, be nice and WARN instead of
      crashing KVM and possibly the kernel.
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
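      The guard-clause pattern, as a hedged userspace sketch (the lookup is a
      hard-coded stand-in for resolving a root to its shadow page, not KVM's
      real helper):

      #include <stddef.h>
      #include <stdio.h>

      struct shadow_page { int level; };

      /* Hypothetical: special roots (e.g. PAE roots) are bare pages with no
       * shadow page, so the lookup yields NULL for them. */
      static struct shadow_page *lookup_shadow_page(unsigned long hpa)
      {
              (void)hpa;
              return NULL;   /* pretend this root is a special, bare page */
      }

      /* WARN and bail instead of blindly freeing and crashing. */
      static void free_root_page(unsigned long hpa)
      {
              struct shadow_page *sp = lookup_shadow_page(hpa);

              if (!sp) {
                      fprintf(stderr, "WARN: root %#lx has no shadow page\n",
                              hpa);
                      return;
              }
              /* ... the actual freeing would go here ... */
      }

      int main(void)
      {
              free_root_page(0x1000);
              return 0;
      }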
    • KVM: x86/mmu: make apf token non-zero to fix bug · 6f3c1fc5
      Committed by Liang Zhang
      In the current async page fault logic, when a page is ready, KVM relies on
      kvm_arch_can_dequeue_async_page_present() to determine whether to deliver
      a READY event to the guest. This function tests the token value of struct
      kvm_vcpu_pv_apf_data, which the guest kernel must reset to zero once it
      has finished handling a READY event; a zero value means the previous
      READY event is done, so KVM can deliver another one.
      But kvm_arch_setup_async_pf() may produce a valid token whose value is
      zero, which clashes with that convention and may lead to the loss of the
      READY event.
      
      This bug may cause task blocked forever in Guest:
       INFO: task stress:7532 blocked for more than 1254 seconds.
             Not tainted 5.10.0 #16
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
       task:stress          state:D stack:    0 pid: 7532 ppid:  1409
       flags:0x00000080
       Call Trace:
        __schedule+0x1e7/0x650
        schedule+0x46/0xb0
        kvm_async_pf_task_wait_schedule+0xad/0xe0
        ? exit_to_user_mode_prepare+0x60/0x70
        __kvm_handle_async_pf+0x4f/0xb0
        ? asm_exc_page_fault+0x8/0x30
        exc_page_fault+0x6f/0x110
        ? asm_exc_page_fault+0x8/0x30
        asm_exc_page_fault+0x1e/0x30
       RIP: 0033:0x402d00
       RSP: 002b:00007ffd31912500 EFLAGS: 00010206
       RAX: 0000000000071000 RBX: ffffffffffffffff RCX: 00000000021a32b0
       RDX: 000000000007d011 RSI: 000000000007d000 RDI: 00000000021262b0
       RBP: 00000000021262b0 R08: 0000000000000003 R09: 0000000000000086
       R10: 00000000000000eb R11: 00007fefbdf2baa0 R12: 0000000000000000
       R13: 0000000000000002 R14: 000000000007d000 R15: 0000000000001000
      Signed-off-by: Liang Zhang <zhangliang5@huawei.com>
      Message-Id: <20220222031239.1076682-1-zhangliang5@huawei.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
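      One common way to guarantee a non-zero token, shown as a hedged sketch
      (this is the general pattern, not the exact fix that went into
      kvm_arch_setup_async_pf()): increment a counter and skip the reserved
      zero value on wrap-around:

      #include <stdint.h>
      #include <stdio.h>

      /* The consumer treats token == 0 as "previous READY event consumed",
       * so the producer must never hand out 0. */
      static uint32_t next_apf_token(void)
      {
              static uint32_t counter;

              counter++;
              if (counter == 0)   /* wrapped: 0 is reserved, skip it */
                      counter = 1;
              return counter;
      }

      int main(void)
      {
              int i;

              /* Even across a 32-bit wrap, the token is never 0. */
              for (i = 0; i < 3; i++)
                      printf("token: %u\n", next_apf_token());
              return 0;
      }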
  6. 19 Feb 2022, 1 commit
  7. 11 Feb 2022, 14 commits
  8. 20 Jan 2022, 1 commit
  9. 20 Dec 2021, 1 commit
    • KVM: x86: Retry page fault if MMU reload is pending and root has no sp · 18c841e1
      Committed by Sean Christopherson
      Play nice with a NULL shadow page when checking for an obsolete root in
      the page fault handler by flagging the page fault as stale if there's no
      shadow page associated with the root and KVM_REQ_MMU_RELOAD is pending.
      Invalidating memslots, which is the only case where _all_ roots need to
      be reloaded, requests all vCPUs to reload their MMUs while holding
      mmu_lock for write.
      
      The "special" roots, e.g. pae_root when KVM uses PAE paging, are not
      backed by a shadow page.  Running with TDP disabled or with nested NPT
      explodes spectacularly due to dereferencing a NULL shadow page pointer.
      
      Skip the KVM_REQ_MMU_RELOAD check if there is a valid shadow page for the
      root.  Zapping shadow pages in response to guest activity, e.g. when the
      guest frees a PGD, can trigger KVM_REQ_MMU_RELOAD even if the current
      vCPU isn't using the affected root.  I.e. KVM_REQ_MMU_RELOAD can be seen
      with a completely valid root shadow page.  This is a bit of a moot point
      as KVM currently unloads all roots on KVM_REQ_MMU_RELOAD, but that will
      be cleaned up in the future.
      
      Fixes: a955cad8 ("KVM: x86/mmu: Retry page fault if root is invalidated by memslot update")
      Cc: stable@vger.kernel.org
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20211209060552.2956723-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
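      A hedged model of the staleness check (invented signature, not the real
      KVM helper): a root with no backing shadow page cannot be checked for
      obsolescence, so the fault is treated as stale whenever a reload
      request is pending:

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct shadow_page { bool obsolete; };

      /* If the root has no shadow page (e.g. pae_root), fall back to the
       * pending-reload request instead of dereferencing a NULL pointer. */
      static bool fault_is_stale(struct shadow_page *sp, bool reload_pending)
      {
              if (!sp)
                      return reload_pending;
              return sp->obsolete;
      }

      int main(void)
      {
              printf("stale: %d\n", fault_is_stale(NULL, true));
              return 0;
      }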
  10. 08 Dec 2021, 3 commits