1. 24 Jun 2022 (19 commits)
  2. 20 Jun 2022 (9 commits)
    • KVM: x86/mmu: Shove refcounted page dependency into host_pfn_mapping_level() · 5d49f08c
      Committed by Sean Christopherson
      Move the check that restricts mapping huge pages into the guest to pfns
      that are backed by refcounted 'struct page' memory into the helper that
      actually "requires" a 'struct page', host_pfn_mapping_level().  In
      addition to deduplicating code, moving the check to the helper eliminates
      the subtle requirement that the caller check that the incoming pfn is
      backed by a refcounted struct page, and as an added bonus avoids an extra
      pfn_to_page() lookup.
      
      Note, the is_error_noslot_pfn() check in kvm_mmu_hugepage_adjust() needs
      to stay where it is, as it guards against dereferencing a NULL memslot in
      the kvm_slot_dirty_track_enabled() that follows.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220429010416.2788472-11-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
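      A minimal sketch of the result, with the function shape assumed rather
      than copied from the kernel: the refcounted-page restriction now lives in
      the helper itself, so callers simply get PG_LEVEL_4K back for pfns that
      are not backed by a refcounted struct page.

        /* Sketch only: everything except the relocated check is trimmed. */
        static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
                                          const struct kvm_memory_slot *slot)
        {
                /* No refcounted 'struct page', no huge page. */
                if (!kvm_pfn_to_refcounted_page(pfn))
                        return PG_LEVEL_4K;

                /* (real code: walk the host page tables for the mapping level) */
                return PG_LEVEL_4K;
        }
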
    • KVM: Rename/refactor kvm_is_reserved_pfn() to kvm_pfn_to_refcounted_page() · b14b2690
      Committed by Sean Christopherson
      Rename and refactor kvm_is_reserved_pfn() to kvm_pfn_to_refcounted_page()
      to better reflect what KVM is actually checking, and to eliminate extra
      pfn_to_page() lookups.  The kvm_release_pfn_*() and kvm_try_get_pfn()
      helpers in particular benefit from "refcounted" nomenclature, as it's not
      all that obvious why KVM needs to get/put refcounts for some PG_reserved
      pages (ZERO_PAGE and ZONE_DEVICE).
      
      Add a comment to call out that the list of exceptions to PG_reserved is
      all but guaranteed to be incomplete.  The list has mostly been compiled
      by people throwing noodles at KVM and finding out they stick a little too
      well, e.g. the ZERO_PAGE's refcount overflowed and ZONE_DEVICE pages
      didn't get freed.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220429010416.2788472-10-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
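      A sketch of the renamed helper, assuming only the behavior described
      above: return the refcounted struct page backing @pfn, or NULL if there
      is none, with the PG_reserved exceptions handled explicitly.

        struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
        {
                struct page *page;

                if (is_error_noslot_pfn(pfn) || !pfn_valid(pfn))
                        return NULL;

                page = pfn_to_page(pfn);
                if (!PageReserved(page))
                        return page;

                /*
                 * Known-incomplete list of PG_reserved pages that are still
                 * refcounted, e.g. the ZERO_PAGE(s) and ZONE_DEVICE pages.
                 */
                if (is_zero_pfn(pfn) || kvm_is_zone_device_page(page))
                        return page;

                return NULL;
        }
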
    • KVM: Take a 'struct page', not a pfn in kvm_is_zone_device_page() · 284dc493
      Committed by Sean Christopherson
      Operate on a 'struct page' instead of a pfn when checking if a page is a
      ZONE_DEVICE page, and rename the helper accordingly.  Generally speaking,
      KVM doesn't actually care about ZONE_DEVICE memory, i.e. shouldn't do
      anything special for ZONE_DEVICE memory.  Rather, KVM wants to treat
      ZONE_DEVICE memory like regular memory, and the need to identify
      ZONE_DEVICE memory only arises as an exception to PG_reserved pages. In
      other words, KVM should only ever check for ZONE_DEVICE memory after KVM
      has already verified that there is a struct page associated with the pfn.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220429010416.2788472-9-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
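      A sketch of the reworked helper with the exact body assumed; the point is
      the signature change, since callers must already hold a struct page.

        bool kvm_is_zone_device_page(struct page *page)
        {
                /* Only reached after the pfn has been resolved to a page. */
                return is_zone_device_page(page);
        }
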
    • KVM: x86/mmu: Use common logic for computing the 32/64-bit base PA mask · 70e41c31
      Committed by Sean Christopherson
      Use common logic for computing PT_BASE_ADDR_MASK for 32-bit, 64-bit, and
      EPT paging.  Both PAGE_MASK and the new common logic are supersets of
      what is actually needed for 32-bit paging.  PAGE_MASK sets bits 63:12 and
      the former GUEST_PT64_BASE_ADDR_MASK sets bits 51:12, so regardless of
      which value is used, the result will always be bits 31:12.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-9-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
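      A small worked example of the argument above, using the kernel's
      GENMASK_ULL() helper; the function name is purely illustrative.

        /*
         * PAGE_MASK covers bits 63:12 and GUEST_PT64_BASE_ADDR_MASK covers
         * bits 51:12.  A 32-bit gPTE only populates bits 31:0, so either
         * mask extracts exactly bits 31:12.
         */
        static inline u64 pt32_base_addr(u32 gpte)
        {
                return gpte & GENMASK_ULL(51, 12);  /* == gpte & GENMASK_ULL(31, 12) */
        }
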
    • KVM: x86/mmu: Truncate paging32's PT_BASE_ADDR_MASK to 32 bits · f7384b88
      Committed by Sean Christopherson
      Truncate paging32's PT_BASE_ADDR_MASK to a pt_element_t, i.e. to 32 bits.
      Ignoring PSE huge pages, the mask is only used in conjunction with gPTEs,
      which are 32 bits, and so the address is limited to bits 31:12.
      
      PSE huge pages encode PA bits 39:32 in PTE bits 20:13, i.e. they need custom
      logic to handle their funky encoding regardless of PT_BASE_ADDR_MASK.
      
      Note, PT_LVL_OFFSET_MASK is somewhat confusing in that it computes the
      offset of the _gfn_, not of the gpa, i.e. not having bits 63:32 set in
      PT_BASE_ADDR_MASK is again correct.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-8-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
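      One possible shape of the change, assuming paging_tmpl.h's PTTYPE == 32
      definitions; the cast to the 32-bit gPTE type is what performs the
      truncation.

        #define pt_element_t       u32                          /* 32-bit gPTEs */
        #define PT_BASE_ADDR_MASK  ((pt_element_t)PAGE_MASK)    /* bits 31:12 only */
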
    • KVM: x86/mmu: Use common macros to compute 32/64-bit paging masks · f6b8ea6d
      Committed by Paolo Bonzini
      Dedup the code for generating (most of) the per-type PT_* masks in
      paging_tmpl.h.  The relevant macros only vary based on the number of bits
      per level, and that smidge of info is already provided in a common form
      as PT_LEVEL_BITS.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-7-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
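      A sketch of the dedup'd pattern (PT_LEVEL_BITS is named in the log, the
      remaining names and bodies are assumed): each paging flavor supplies its
      bits-per-level and derives everything else from the common helpers.

        #if PTTYPE == 32
          #define PT_LEVEL_BITS  10   /* 1024 entries per 32-bit page table */
        #else
          #define PT_LEVEL_BITS  9    /* 512 entries per 64-bit/EPT page table */
        #endif

        #define PT_LEVEL_SHIFT(level)  __PT_LEVEL_SHIFT(level, PT_LEVEL_BITS)
        #define PT_INDEX(addr, level)  __PT_INDEX(addr, level, PT_LEVEL_BITS)
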
    • KVM: x86/mmu: Use separate namespaces for guest PTEs and shadow PTEs · 2ca3129e
      Committed by Sean Christopherson
      Separate the macros for KVM's shadow PTEs (SPTE) from guest 64-bit PTEs
      (PT64).  SPTE and PT64 are _mostly_ the same, but the few differences are
      quite critical, e.g. *_BASE_ADDR_MASK must differentiate between host and
      guest physical address spaces, and SPTE_PERM_MASK (was PT64_PERM_MASK) is
      very much specific to SPTEs.
      
      Opportunistically (and temporarily) move most guest macros into paging.h
      to clearly associate them with shadow paging, and to ensure that they're
      not used as of this commit.  A future patch will eliminate them entirely.
      
      Sadly, PT32_LEVEL_BITS is left behind in mmu_internal.h because it's
      needed for the quadrant calculation in kvm_mmu_get_page().  The quadrant
      calculation is hot enough (when using shadow paging with 32-bit guests)
      that adding a per-context helper is undesirable, and burying the
      computation in paging_tmpl.h with a forward declaration isn't exactly an
      improvement.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
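      To make the PT32_LEVEL_BITS dependency concrete, an illustrative (not
      verbatim) version of the quadrant computation, with macro names assumed:
      a 32-bit guest table holds 2^10 entries while a shadow page holds 2^9,
      so each guest table is shadowed by 2^level "quadrants".

        static unsigned int quadrant_sketch(gpa_t gaddr, int level)
        {
                unsigned int q = gaddr >> (PAGE_SHIFT + SPTE_LEVEL_BITS * level);

                return q & ((1u << ((PT32_LEVEL_BITS - SPTE_LEVEL_BITS) * level)) - 1);
        }
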
    • KVM: x86/mmu: Dedup macros for computing various page table masks · 42c88ff8
      Committed by Sean Christopherson
      Provide common helper macros to generate various masks, shifts, etc...
      for 32-bit vs. 64-bit page tables.  Only the inputs differ; the actual
      calculations are identical.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
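      A sketch of what such common helpers can look like (exact definitions
      assumed, not copied): the bits-per-level argument is the only thing that
      differs between the 32-bit and 64-bit formats.

        #define __PT_LEVEL_SHIFT(level, bits_per_level) \
                (PAGE_SHIFT + ((level) - 1) * (bits_per_level))

        #define __PT_INDEX(address, level, bits_per_level) \
                (((address) >> __PT_LEVEL_SHIFT(level, bits_per_level)) & \
                 ((1 << (bits_per_level)) - 1))

        #define __PT_LVL_ADDR_MASK(base_addr_mask, level, bits_per_level) \
                ((base_addr_mask) & \
                 ~((1ULL << __PT_LEVEL_SHIFT(level, bits_per_level)) - 1))
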
    • KVM: x86/mmu: Bury 32-bit PSE paging helpers in paging_tmpl.h · b3fcdb04
      Committed by Sean Christopherson
      Move a handful of one-off macros and helpers for 32-bit PSE paging into
      paging_tmpl.h and hide them behind "PTTYPE == 32".  Under no circumstance
      should anything but 32-bit shadow paging care about PSE paging.
      
      No functional change intended.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220614233328.3896033-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
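      A hypothetical example of the kind of helper that now lives behind
      PTTYPE == 32 (name and body are illustrative, based on the PSE encoding
      described in the f7384b88 entry above: PA bits 39:32 sit in PTE bits
      20:13).

        #if PTTYPE == 32
        /* Recover the upper physical-address bits of a PSE huge page. */
        static inline u64 pse_high_pa_bits(u32 gpte)
        {
                return ((u64)(gpte >> 13) & 0xff) << 32;
        }
        #endif
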
  3. 15 Jun 2022 (5 commits)
  4. 09 Jun 2022 (1 commit)
    • KVM: x86/mmu: Set memory encryption "value", not "mask", in shadow PDPTRs · d2263de1
      Committed by Yuan Yao
      Assign shadow_me_value, not shadow_me_mask, to PAE root entries,
      a.k.a. shadow PDPTRs, when host memory encryption is supported.  The
      "mask" is the set of all possible memory encryption bits, e.g. MKTME
      KeyIDs, whereas "value" holds the actual value that needs to be
      stuffed into host page tables.
      
      Using shadow_me_mask results in a failed VM-Entry due to setting
      reserved PA bits in the PDPTRs, and ultimately causes an OOPS due to
      physical addresses with non-zero MKTME bits sending to_shadow_page()
      into the weeds:
      
      set kvm_intel.dump_invalid_vmcs=1 to dump internal KVM state.
      BUG: unable to handle page fault for address: ffd43f00063049e8
      PGD 86dfd8067 P4D 0
      Oops: 0000 [#1] PREEMPT SMP
      RIP: 0010:mmu_free_root_page+0x3c/0x90 [kvm]
       kvm_mmu_free_roots+0xd1/0x200 [kvm]
       __kvm_mmu_unload+0x29/0x70 [kvm]
       kvm_mmu_unload+0x13/0x20 [kvm]
       kvm_arch_destroy_vm+0x8a/0x190 [kvm]
       kvm_put_kvm+0x197/0x2d0 [kvm]
       kvm_vm_release+0x21/0x30 [kvm]
       __fput+0x8e/0x260
       ____fput+0xe/0x10
       task_work_run+0x6f/0xb0
       do_exit+0x327/0xa90
       do_group_exit+0x35/0xa0
       get_signal+0x911/0x930
       arch_do_signal_or_restart+0x37/0x720
       exit_to_user_mode_prepare+0xb2/0x140
       syscall_exit_to_user_mode+0x16/0x30
       do_syscall_64+0x4e/0x90
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      Fixes: e54f1ff2 ("KVM: x86/mmu: Add shadow_me_value and repurpose shadow_me_mask")
      Signed-off-by: Yuan Yao <yuan.yao@intel.com>
      Reviewed-by: Kai Huang <kai.huang@intel.com>
      Message-Id: <20220608012015.19566-1-yuan.yao@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
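      A minimal sketch of the corrected assignment (the surrounding
      mmu_alloc_*_roots() context and the wrapper name are assumed): the PDPTR
      gets the single encryption value, whereas the mask would set every
      possible KeyID bit and trip the reserved-PA check at VM-Entry.

        static void set_pae_root(struct kvm_mmu *mmu, int i, hpa_t root)
        {
                /* was "... | shadow_me_mask", i.e. all possible KeyID bits */
                mmu->pae_root[i] = root | PT_PRESENT_MASK | shadow_me_value;
        }
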
  5. 08 Jun 2022 (2 commits)
    • KVM: x86/mmu: Comment FNAME(sync_page) to document TLB flushing logic · b8b9156e
      Committed by Sean Christopherson
      Add a comment to FNAME(sync_page) to explain why the TLB flushing logic
      conspicuously doesn't handle the scenario of guest protections being
      reduced.  Specifically, if synchronizing a SPTE drops execute protections,
      KVM will not emit a TLB flush, whereas dropping writable or clearing A/D
      bits does trigger a flush via mmu_spte_update().  Architecturally, until
      the GPTE is implicitly or explicitly flushed from the guest's perspective,
      KVM is not required to flush any old, stale translations.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Message-Id: <20220513195000.99371-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
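      The documented rule, restated as a tiny decision helper (purely
      illustrative; the names are not from the kernel):

        /* Does synchronizing this SPTE require an immediate TLB flush? */
        static bool sync_spte_needs_flush(bool dropped_writable, bool cleared_ad_bits)
        {
                /* Dropping W or clearing A/D bits flushes via mmu_spte_update(). */
                if (dropped_writable || cleared_ad_bits)
                        return true;

                /*
                 * Reduced execute permissions alone: no flush, because until
                 * the guest flushes the gPTE itself, stale translations are
                 * architecturally permitted.
                 */
                return false;
        }
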
    • KVM: x86/mmu: Drop RWX=0 SPTEs during ept_sync_page() · 9fb35657
      Committed by Sean Christopherson
      All of sync_page()'s existing checks filter out only !PRESENT gPTE,
      because without execute-only, all upper levels are guaranteed to be at
      least READABLE.  However, if EPT with execute-only support is in use by
      L1, KVM can create an SPTE that is shadow-present but guest-inaccessible
      (RWX=0) if the upper level combined permissions are R (or RW) and
      the leaf EPTE is changed from R (or RW) to X.  Because the EPTE is
      considered present when viewed in isolation, and no reserved bits are set,
      FNAME(prefetch_invalid_gpte) will consider the GPTE valid, and cause a
      not-present SPTE to be created.
      
      The SPTE is "correct": the guest translation is inaccessible because
      the combined protections of all levels yield RWX=0, and KVM will just
      redirect any vmexits to the guest.  If EPT A/D bits are disabled, KVM
      can mistake the SPTE for an access-tracked SPTE, but again such confusion
      isn't fatal, as the "saved" protections are also RWX=0.  However,
      creating a useless SPTE in general means that KVM messed up something,
      even if this particular goof didn't manifest as a functional bug.
      So, drop SPTEs whose new protections will yield a RWX=0 SPTE, and
      add a WARN in make_spte() to detect creation of SPTEs that will
      result in RWX=0 protections.
      
      Fixes: d95c5568 ("kvm: mmu: track read permission explicitly for shadow EPT page tables")
      Cc: David Matlack <dmatlack@google.com>
      Cc: Ben Gardon <bgardon@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220513195000.99371-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
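      A sketch of the two guards, with placement and surrounding variables
      assumed from the description above:

        /* FNAME(sync_page): pte_access is the combined guest protection set. */
        if (!pte_access) {
                drop_spte(vcpu->kvm, sptep);    /* don't install a useless SPTE */
                flush = true;
        }

        /* make_spte(): being asked to build an RWX=0 SPTE means KVM goofed. */
        WARN_ON_ONCE(!pte_access && !shadow_present_mask);
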
  6. 07 Jun 2022 (2 commits)
  7. 21 May 2022 (1 commit)
    • KVM: x86/mmu: fix NULL pointer dereference on guest INVPCID · 9f46c187
      Committed by Paolo Bonzini
      With shadow paging enabled, the INVPCID instruction results in a call
      to kvm_mmu_invpcid_gva.  If INVPCID is executed with CR0.PG=0, the
      invlpg callback is not set and the result is a NULL pointer dereference.
      Fix it trivially by checking for mmu->invlpg before every call.
      
      There are other possibilities:
      
      - check for CR0.PG, because KVM (like all Intel processors after P5)
        flushes guest TLB on CR0.PG changes so that INVPCID/INVLPG are a
        nop with paging disabled
      
      - check for EFER.LMA, because KVM syncs and flushes when switching
        MMU contexts outside of 64-bit mode
      
      All of these are tricky; go for the simple solution.  This is CVE-2022-1789.
      Reported-by: Yongkang Jia <kangel@zju.edu.cn>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
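      A minimal sketch of the fix, assuming the surrounding
      kvm_mmu_invpcid_gva() context: each indirect call through mmu->invlpg is
      now preceded by a NULL check, since the callback is not set when the
      guest runs with CR0.PG=0.

        /* one of the call sites; all of them now check the callback first */
        if (mmu->invlpg)        /* NULL with paging disabled */
                mmu->invlpg(vcpu, gva, mmu->root.hpa);
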
  8. 12 May 2022 (1 commit)
    • KVM: x86/mmu: Update number of zapped pages even if page list is stable · b28cb0cd
      Committed by Sean Christopherson
      When zapping obsolete pages, update the running count of zapped pages
      regardless of whether or not the list has become unstable due to zapping
      a shadow page with its own child shadow pages.  If the VM is backed by
      mostly 4KiB pages, KVM can zap an absurd number of SPTEs without bumping
      the batch count and thus without yielding.  In the worst-case scenario,
      this can cause a soft lockup.
      
       watchdog: BUG: soft lockup - CPU#12 stuck for 22s! [dirty_log_perf_:13020]
         RIP: 0010:workingset_activation+0x19/0x130
         mark_page_accessed+0x266/0x2e0
         kvm_set_pfn_accessed+0x31/0x40
         mmu_spte_clear_track_bits+0x136/0x1c0
         drop_spte+0x1a/0xc0
         mmu_page_zap_pte+0xef/0x120
         __kvm_mmu_prepare_zap_page+0x205/0x5e0
         kvm_mmu_zap_all_fast+0xd7/0x190
         kvm_mmu_invalidate_zap_pages_in_memslot+0xe/0x10
         kvm_page_track_flush_slot+0x5c/0x80
         kvm_arch_flush_shadow_memslot+0xe/0x10
         kvm_set_memslot+0x1a8/0x5d0
         __kvm_set_memory_region+0x337/0x590
         kvm_vm_ioctl+0xb08/0x1040
      
      Fixes: fbb158cb ("KVM: x86/mmu: Revert "Revert "KVM: MMU: zap pages in batch""")
      Reported-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220511145122.3133334-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
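      A sketch of the reordering (loop context assumed from the description):
      nr_zapped is added to the batch counter before reacting to list
      instability, so long runs of leaf-only zaps still reach the yield
      threshold.

        unstable = __kvm_mmu_prepare_zap_page(kvm, sp,
                                              &kvm->arch.zapped_obsolete_pages,
                                              &nr_zapped);
        batch += nr_zapped;     /* previously skipped whenever 'unstable' */

        if (unstable)
                goto restart;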