    x86/kvm/hyper-v: avoid spurious pending stimer on vCPU init · 013cc6eb
    Committed by Vitaly Kuznetsov
    When userspace initializes guest vCPUs it may want to zero all supported
    MSRs, including Hyper-V related ones such as HV_X64_MSR_STIMERn_CONFIG/
    HV_X64_MSR_STIMERn_COUNT. With commit f3b138c5 ("kvm/x86: Update SynIC
    timers on guest entry only") we began doing stimer_mark_pending()
    unconditionally on every config change.
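
    For context, stimer_mark_pending() in arch/x86/kvm/hyperv.c of that era
    looked roughly like the sketch below (paraphrased, not a verbatim quote):
    it sets the timer's bit in stimer_pending_bitmap and arms
    KVM_REQ_HV_STIMER:

        static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
                                        bool vcpu_kick)
        {
                struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);

                /* Record the timer in the per-vCPU pending bitmap... */
                set_bit(stimer->index,
                        vcpu_to_hv_vcpu(vcpu)->stimer_pending_bitmap);
                /* ...and request stimer processing on the next guest entry. */
                kvm_make_request(KVM_REQ_HV_STIMER, vcpu);
                if (vcpu_kick)
                        kvm_vcpu_kick(vcpu);
        }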
    
    The issue I'm observing manifests itself as follows:
    - QEMU writes 0 to the STIMERn_{CONFIG,COUNT} MSRs; KVM marks all stimers
      as pending in stimer_pending_bitmap and arms KVM_REQ_HV_STIMER;
    - kvm_hv_has_stimer_pending() starts returning true (see the sketch after
      this list);
    - kvm_vcpu_has_events() starts returning true;
    - kvm_arch_vcpu_runnable() starts returning true;
    - when kvm_arch_vcpu_ioctl_run() hits the
      (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) case:
      - kvm_vcpu_block() takes the 'kvm_vcpu_check_block(vcpu) < 0' path and
        returns immediately, skipping the normal wait path;
      - -EAGAIN is returned from kvm_arch_vcpu_ioctl_run() immediately, forcing
        userspace to retry.
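
    The first link in that chain, kvm_hv_has_stimer_pending(), is essentially
    an emptiness test on the bitmap; a rough sketch (again paraphrased from
    hyperv.c of that era):

        bool kvm_hv_has_stimer_pending(struct kvm_vcpu *vcpu)
        {
                struct kvm_vcpu_hv *hv_vcpu = vcpu_to_hv_vcpu(vcpu);

                /* Any set bit makes the vCPU look runnable to its callers. */
                return !bitmap_empty(hv_vcpu->stimer_pending_bitmap,
                                     HV_SYNIC_STIMER_COUNT);
        }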
    
    So instead of the normal wait path we get a busy loop on all secondary
    vCPUs before they receive the INIT signal. This is undesirable, especially
    given that it happens even when Hyper-V extensions are not used.
    
    Generally, it is pointless to mark a disabled stimer as pending in
    stimer_pending_bitmap and arm KVM_REQ_HV_STIMER, as the only thing
    kvm_hv_process_stimers() will do with it is clear the corresponding bit.
    We can simply avoid marking disabled timers as pending instead, as
    sketched below.
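
    A minimal diff-style sketch of that change against the stimer_set_config()
    and stimer_set_count() MSR handlers in hyperv.c (paraphrased, not the
    verbatim patch; the exact enable-bit test depends on how the tree
    represents the stimer config):

        /* At the end of stimer_set_config() and stimer_set_count(): */
        -       stimer_mark_pending(stimer, false);
        +       if (stimer->config.enable)      /* the HV_STIMER_ENABLE bit */
        +               stimer_mark_pending(stimer, false);

    With the enable-bit guard in place, the zeroing writes from userspace no
    longer set bits in stimer_pending_bitmap, so an uninitialized vCPU stays
    blockable until it actually has work to do.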
    
    Fixes: f3b138c5 ("kvm/x86: Update SynIC timers on guest entry only")
    Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>