1. 28 Sep 2020, 9 commits
  2. 13 Sep 2020, 1 commit
    • KVM: nSVM: more strict SMM checks when returning to nested guest · 3ebb5d26
      Committed by Maxim Levitsky
      * Check that the guest is a 64-bit guest; otherwise the SVM-related fields
        in the SMM state save area are not defined.
      
      * If the SMM save area indicates that SMM interrupted a running guest,
        check that EFER.SVME, which is also saved in this area, is set; otherwise
        the guest might have tampered with the SMM save area, so indicate an
        emulation failure, which should triple-fault the guest.
      
      * Check that the guest CPUID supports SVM (due to the same issue as above).
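      
      A minimal sketch of the kind of checks described above, performed when
      leaving SMM back into a nested guest; the helper names, SMRAM offsets and
      return values are approximations, not the literal patch:
      
          /* On RSM, validate the SMM save area before re-entering L2. */
          if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
                  u64 saved_efer = GET_SMSTATE(u64, smstate, 0x7ed0); /* offset approximate */
                  u64 guest      = GET_SMSTATE(u64, smstate, 0x7ed8); /* "SMM interrupted a guest" flag */
      
                  if (guest) {
                          /* SMM interrupted a running L2: the save area must be
                           * self-consistent, otherwise fail emulation. */
                          if (!guest_cpuid_has(vcpu, X86_FEATURE_SVM))
                                  return 1;
                          if (!(saved_efer & EFER_SVME))
                                  return 1;
                  }
          }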
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20200827162720.278690-4-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3ebb5d26
  3. 12 Sep 2020, 1 commit
  4. 08 Sep 2020, 1 commit
  5. 24 Aug 2020, 1 commit
  6. 31 Jul 2020, 4 commits
  7. 11 Jul 2020, 4 commits
    • KVM: x86: Add a capability for GUEST_MAXPHYADDR < HOST_MAXPHYADDR support · 3edd6839
      Committed by Mohammed Gamal
      This patch adds a new capability, KVM_CAP_SMALLER_MAXPHYADDR, which
      allows userspace to query whether the underlying architecture supports
      GUEST_MAXPHYADDR < HOST_MAXPHYADDR and to act accordingly
      (e.g. QEMU can decide whether it should warn for -cpu ..,phys-bits=X).
      
      The complications in this patch are due to unexpected (but documented)
      behaviour seen with NPF vmexit handling on AMD processors.  If
      SVM is modified to add guest physical address checks in the NPF
      and guest #PF paths, we see the following error multiple times in
      the 'access' test in kvm-unit-tests:
      
                  test pte.p pte.36 pde.p: FAIL: pte 2000021 expected 2000001
                  Dump mapping: address: 0x123400000000
                  ------L4: 24c3027
                  ------L3: 24c4027
                  ------L2: 24c5021
                  ------L1: 1002000021
      
      This is because the PTE's accessed bit is set by the CPU hardware before
      the NPF vmexit. This is handled completely by hardware and cannot be fixed
      in software.
      
      Therefore, availability of the new capability depends on a boolean variable,
      allow_smaller_maxphyaddr, which is set individually by the VMX and SVM init
      routines. On VMX it is always set to true; on SVM it is only set to true
      when NPT is not enabled.
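      
      A minimal userspace sketch of querying the new capability via
      KVM_CHECK_EXTENSION (assumes a <linux/kvm.h> that already defines
      KVM_CAP_SMALLER_MAXPHYADDR):
      
          #include <fcntl.h>
          #include <stdio.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>
      
          int main(void)
          {
                  int kvm = open("/dev/kvm", O_RDWR);
                  if (kvm < 0)
                          return 1;
                  /* A return value > 0 means GUEST_MAXPHYADDR < HOST_MAXPHYADDR is supported. */
                  int ret = ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_SMALLER_MAXPHYADDR);
                  printf("KVM_CAP_SMALLER_MAXPHYADDR: %s\n",
                         ret > 0 ? "supported" : "not supported");
                  return 0;
          }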
      
      CC: Tom Lendacky <thomas.lendacky@amd.com>
      CC: Babu Moger <babu.moger@amd.com>
      Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
      Message-Id: <20200710154811.418214-10-mgamal@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3edd6839
    • KVM: x86: rename update_bp_intercept to update_exception_bitmap · 6986982f
      Committed by Paolo Bonzini
      We would like to introduce a callback to update the #PF intercept
      when CPUID changes.  Just reuse update_bp_intercept since VMX is
      already using update_exception_bitmap instead of a bespoke function.
      
      While at it, remove an unnecessary assignment in the SVM version,
      which is already done in the caller (kvm_arch_vcpu_ioctl_set_guest_debug)
      and has nothing to do with the exception bitmap.
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6986982f
    • KVM: nSVM: prepare to handle errors from enter_svm_guest_mode() · 59cd9bc5
      Committed by Vitaly Kuznetsov
      Some operations in enter_svm_guest_mode() may fail; e.g. currently
      the kvm_set_cr3() return value is suppressed. Prepare the code to
      propagate errors (see the sketch below).
      
      No functional change intended.
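      
      A hedged sketch of the shape of such a change, under the assumption that
      enter_svm_guest_mode() gains an int return value that callers check
      (function signature and call sites are approximate, not the literal patch):
      
          int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
                                   struct vmcb *nested_vmcb)
          {
                  int ret;
                  ...
                  ret = kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
                  if (ret)
                          return ret;     /* propagate instead of ignoring */
                  ...
                  return 0;
          }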
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710141157.1640173-5-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      59cd9bc5
    • KVM: x86: move MSR_IA32_PERF_CAPABILITIES emulation to common x86 code · d574c539
      Committed by Vitaly Kuznetsov
      The state_test/smm_test selftests are failing on AMD with:
      "Unexpected result from KVM_GET_MSRS, r: 51 (failed MSR was 0x345)"
      
      MSR_IA32_PERF_CAPABILITIES is an emulated MSR on Intel but it is not
      known to the AMD code, so we can move the emulation to common x86 code.
      For AMD, we basically just allow the host to read and write zero to the MSR.
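      
      A rough sketch of the idea in the common MSR read path (the field and
      variable names are assumptions for illustration, not the literal patch):
      
          /* in kvm_get_msr_common(): */
          case MSR_IA32_PERF_CAPABILITIES:
                  if (!msr_info->host_initiated)
                          return 1;       /* guest accesses stay gated elsewhere */
                  /* Host-initiated reads succeed everywhere; the stored value is 0 on AMD. */
                  msr_info->data = vcpu->arch.perf_capabilities;
                  return 0;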
      
      Fixes: 27461da3 ("KVM: x86/pmu: Support full width counting")
      Suggested-by: Jim Mattson <jmattson@google.com>
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200710152559.1645827-1-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d574c539
  8. 09 Jul 2020, 15 commits
  9. 15 Jun 2020, 1 commit
    • kvm/svm: disable KCSAN for svm_vcpu_run() · b95273f1
      Committed by Qian Cai
      For some reason, running a simple qemu-kvm command with KCSAN enabled
      resets AMD hosts. It turns out svm_vcpu_run() cannot be instrumented;
      disable instrumentation for it for now (see the sketch after the console
      output below).
      
       # /usr/libexec/qemu-kvm -name ubuntu-18.04-server-cloudimg -cpu host
      	-smp 2 -m 2G -hda ubuntu-18.04-server-cloudimg.qcow2
      
      === console output ===
      Kernel 5.6.0-next-20200408+ on an x86_64
      
      hp-dl385g10-05 login:
      
      <...host reset...>
      
      HPE ProLiant System BIOS A40 v1.20 (03/09/2018)
      (C) Copyright 1982-2018 Hewlett Packard Enterprise Development LP
      Early system initialization, please wait...
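      
      A hedged sketch of the kind of change described, assuming the fix is to
      annotate the function with __no_kcsan so KCSAN skips instrumenting it
      (the exact function signature depends on the kernel version):
      
          /* arch/x86/kvm/svm/svm.c */
          static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
          {
                  ...
          }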
      Signed-off-by: Qian Cai <cai@lca.pw>
      Message-Id: <20200415153709.1559-1-cai@lca.pw>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b95273f1
  10. 11 Jun 2020, 1 commit
  11. 08 Jun 2020, 1 commit
    • KVM: SVM: fix calls to is_intercept · fb7333df
      Committed by Paolo Bonzini
      is_intercept takes an INTERCEPT_* constant, not SVM_EXIT_*; because
      of this, the compiler was removing the body of the conditionals,
      as if is_intercept returned 0.
      
      This unveils a latent bug: when clearing the VINTR intercept,
      int_ctl must also be changed in the L1 VMCB (svm->nested.hsave),
      just like the intercept itself is also changed in the L1 VMCB.
      Otherwise V_IRQ remains set and, due to the VINTR intercept being clear,
      we get a spurious injection of a vector 0 interrupt on the next
      L2->L1 vmexit.
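      
      An illustrative sketch of the bug class described above (not the literal
      hunks; INTERCEPT_VINTR and SVM_EXIT_VINTR are the real constants):
      
          /* Wrong: SVM_EXIT_VINTR is an exit code, not an intercept bit number,
           * so the check misbehaves; here the compiler dropped the branch, as if
           * is_intercept() always returned 0. */
          if (is_intercept(svm, SVM_EXIT_VINTR))
                  ...
      
          /* Right: pass the intercept bit that is_intercept() expects. */
          if (is_intercept(svm, INTERCEPT_VINTR))
                  ...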
      Reported-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fb7333df
  12. 01 Jun 2020, 1 commit
    • KVM: x86: extend struct kvm_vcpu_pv_apf_data with token info · 68fd66f1
      Committed by Vitaly Kuznetsov
      Currently, the APF mechanism relies on the #PF abuse where the token is
      passed through CR2. If we switch to using interrupts to deliver page-ready
      notifications, we need a different way to pass the data. Extend the existing
      'struct kvm_vcpu_pv_apf_data' with token information for page-ready
      notifications.
      
      While at it, rename 'reason' to 'flags'. This doesn't change the semantics,
      as we only have reasons '1' and '2' and these can be treated as bit flags,
      but KVM_PV_REASON_PAGE_READY is going away with interrupt-based delivery,
      making the 'reason' name misleading.
      
      The newly introduced apf_put_user_ready() temporarily puts both the flags
      and the token information; this will be changed to put the token only once
      we switch to interrupt-based notifications.
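      
      A hedged sketch of the extended layout described above (padding size and
      field order as implied by the commit; the authoritative definition lives in
      arch/x86/include/uapi/asm/kvm_para.h):
      
          struct kvm_vcpu_pv_apf_data {
                  /* Used for 'page not present' events delivered via #PF
                   * (previously named 'reason'; values 1 and 2 act as bit flags). */
                  __u32 flags;
      
                  /* Token for 'page ready' events. */
                  __u32 token;
      
                  __u8 pad[56];
                  __u32 enabled;
          };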
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200525144125.143875-3-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      68fd66f1