1. 22 October 2021 (22 commits)
  2. 19 October 2021 (6 commits)
    • KVM: x86: Expose TSC offset controls to userspace · 828ca896
      Oliver Upton authored
      To date, VMM-directed TSC synchronization and migration have been a bit
      messy. KVM has some baked-in heuristics around TSC writes to infer if
      the VMM is attempting to synchronize. This is problematic, as it depends
      on host userspace writing to the guest's TSC within 1 second of the last
      write.
      
      A much cleaner approach to configuring the guest's views of the TSC is to
      simply migrate the TSC offset for every vCPU. Offsets are idempotent,
      and thus not subject to change depending on when the VMM actually
      reads/writes values from/to KVM. The VMM can then read the TSC once with
      KVM_GET_CLOCK to capture a (realtime, host_tsc) pair at the instant when
      the guest is paused.
      
      Cc: David Matlack <dmatlack@google.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Oliver Upton <oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20210916181538.968978-8-oupton@google.com>
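The offset-based approach described above can be illustrated with a minimal userspace sketch (hypothetical helpers, not the kernel's code): because an offset is a fixed delta between guest and host TSC captured at one instant, applying it at any later host TSC reading yields a consistent guest view, regardless of when the VMM writes it back.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical illustration of a per-vCPU TSC offset: the difference
 * between the guest's and the host's TSC at a single instant.
 * Unsigned arithmetic wraps modulo 2^64 by design, matching how
 * hardware TSC offsetting behaves. */
static uint64_t tsc_offset(uint64_t guest_tsc, uint64_t host_tsc)
{
    return guest_tsc - host_tsc;
}

/* Reconstruct the guest's TSC from any later host TSC reading. */
static uint64_t guest_tsc_at(uint64_t host_tsc, uint64_t offset)
{
    return host_tsc + offset;
}
```

Because both the guest and host TSCs advance at the same rate on a given host, the offset is identical no matter when it is sampled, which is what makes migrating the offset (rather than an absolute TSC value) robust against VMM timing.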
    • KVM: x86: Refactor tsc synchronization code · 58d4277b
      Oliver Upton authored
      Refactor kvm_synchronize_tsc to make a new function that allows callers
      to specify TSC parameters (offset, value, nanoseconds, etc.) explicitly
      for the sake of participating in TSC synchronization.
      Signed-off-by: Oliver Upton <oupton@google.com>
      Message-Id: <20210916181538.968978-7-oupton@google.com>
      [Make sure kvm->arch.cur_tsc_generation and vcpu->arch.this_tsc_generation are
       equal at the end of __kvm_synchronize_tsc, if matched is false. Reported by
       Maxim Levitsky. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: x86: protect masterclock with a seqcount · 869b4421
      Paolo Bonzini authored
      Protect the reference point for kvmclock with a seqcount, so that
      kvmclock updates for all vCPUs can proceed in parallel.  Xen runstate
      updates will also run in parallel and not bounce the kvmclock cacheline.
      
      Of the variables that were protected by pvclock_gtod_sync_lock,
      nr_vcpus_matched_tsc is different because it is updated outside
      pvclock_update_vm_gtod_copy and read inside it.  Therefore, we
      need to keep it protected by a spinlock.  In fact it must now
      be a raw spinlock, because pvclock_update_vm_gtod_copy, being the
      write-side of a seqcount, is non-preemptible.  Since we already
      have tsc_write_lock which is a raw spinlock, we can just use
      tsc_write_lock as the lock that protects the write-side of the
      seqcount.
      Co-developed-by: Oliver Upton <oupton@google.com>
      Message-Id: <20210916181538.968978-6-oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
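The seqcount pattern used here can be sketched in a few lines of userspace C (this is a simplified, single-threaded illustration, not the kernel's seqcount_t, which additionally needs memory barriers and lockdep integration): the writer bumps the sequence to an odd value before updating and back to even after, and readers retry whenever they observe an odd or changed sequence.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified seqcount-protected clock reference point.  Field names
 * are hypothetical, not KVM's. */
struct clock_data {
    unsigned seq;       /* odd while a write is in progress */
    uint64_t base_ns;
};

static void write_begin(struct clock_data *c) { c->seq++; } /* now odd  */
static void write_end(struct clock_data *c)   { c->seq++; } /* even again */

/* Lockless read side: retry until a stable, even sequence brackets
 * the read.  Real code would use READ_ONCE() and cpu_relax(). */
static uint64_t read_clock(const struct clock_data *c)
{
    unsigned seq;
    uint64_t ns;
    do {
        while ((seq = c->seq) & 1)
            ;           /* writer in progress, spin */
        ns = c->base_ns;
    } while (c->seq != seq);
    return ns;
}
```

The point of the commit is that readers never block writers and writers never block readers, so kvmclock updates for all vCPUs can proceed in parallel; only the write side must be serialized, which is why it can share the existing raw tsc_write_lock.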
    • KVM: x86: Report host tsc and realtime values in KVM_GET_CLOCK · c68dc1b5
      Oliver Upton authored
      Handling the migration of TSCs correctly is difficult, in part because
      Linux does not provide userspace with the ability to retrieve a (TSC,
      realtime) clock pair for a single instant in time. In lieu of a more
      convenient facility, KVM can report similar information in the kvm_clock
      structure.
      
      Provide userspace with a host TSC & realtime pair iff the realtime clock
      is based on the TSC. If userspace provides KVM_SET_CLOCK with a valid
      realtime value, advance the KVM clock by the amount of elapsed time. Do
      not step the KVM clock backwards, though, as it is a monotonic
      oscillator.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Oliver Upton <oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20210916181538.968978-5-oupton@google.com>
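The "advance but never step backwards" rule can be sketched as follows (a minimal model with a hypothetical helper, not KVM's implementation): the guest clock is moved forward by the realtime elapsed since the (realtime, host_tsc) pair was captured, and left untouched if realtime appears to have gone backwards.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: advance a monotonic guest clock by the
 * realtime delta since capture, clamping so it never steps back. */
static uint64_t advance_clock(uint64_t guest_ns, uint64_t saved_rt_ns,
                              uint64_t now_rt_ns)
{
    if (now_rt_ns <= saved_rt_ns)
        return guest_ns;    /* realtime regressed: leave the clock alone */
    return guest_ns + (now_rt_ns - saved_rt_ns);
}
```

The clamp matters because the guest treats kvmclock as a monotonic oscillator; stepping it backwards would violate that contract even if the host's realtime clock was adjusted.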
    • KVM: x86: avoid warning with -Wbitwise-instead-of-logical · 3d5e7a28
      Paolo Bonzini authored
      This is a new warning in clang top-of-tree (will be clang 14):
      
      In file included from arch/x86/kvm/mmu/mmu.c:27:
      arch/x86/kvm/mmu/spte.h:318:9: error: use of bitwise '|' with boolean operands [-Werror,-Wbitwise-instead-of-logical]
              return __is_bad_mt_xwr(rsvd_check, spte) |
                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                       ||
      arch/x86/kvm/mmu/spte.h:318:9: note: cast one or both operands to int to silence this warning
      
      The code is fine, but change it anyway to shut up this clever clogs
      of a compiler.
      
      Reported-by: torvic9@mailbox.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
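The behavioral difference clang is warning about can be demonstrated directly (a standalone illustration with made-up check functions, not the spte.h code): bitwise `|` always evaluates both operands, while logical `||` short-circuits, so both produce the same truth value but may differ in side effects.

```c
#include <assert.h>
#include <stdbool.h>

/* Count how many operands are actually evaluated. */
static int calls;
static bool check_a(void) { calls++; return true;  }
static bool check_b(void) { calls++; return false; }

/* '|' evaluates both operands unconditionally... */
static bool bitwise_or(void) { return check_a() | check_b(); }

/* ...while '||' stops as soon as the result is known. */
static bool logical_or(void) { return check_a() || check_b(); }
```

In the spte.h case the operands are side-effect-free boolean checks, so the two forms are equivalent and the code was fine; the change is purely to silence the diagnostic.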
    • KVM: X86: fix lazy allocation of rmaps · fa13843d
      Paolo Bonzini authored
      If allocation of rmaps fails, but some of the pointers have already been written,
      those pointers can be cleaned up when the memslot is freed, or even reused later
      for another attempt at allocating the rmaps.  Therefore there is no need to
      WARN, as done for example in memslot_rmap_alloc, but the allocation *must* be
      skipped lest KVM overwrite the previous pointer and leak memory.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
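The fix amounts to an allocate-if-absent check, sketched here in userspace C (hypothetical function and types, not the kernel's memslot code): a pointer populated by an earlier, partially failed attempt is kept rather than overwritten, so nothing leaks on retry.

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of lazy, retry-safe allocation: skip slots that an earlier
 * attempt already populated, since overwriting the pointer would
 * leak the previous allocation. */
static int alloc_rmap(void **rmap, size_t n)
{
    if (*rmap)                          /* already allocated: skip */
        return 0;
    *rmap = calloc(n, sizeof(void *));
    return *rmap ? 0 : -1;
}
```

This is why the commit says a WARN is unnecessary: finding the pointer already set is an expected state after a failed pass, not a bug, as long as the second pass leaves it in place.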
  3. 18 October 2021 (1 commit)
  4. 15 October 2021 (1 commit)
  5. 05 October 2021 (3 commits)
  6. 04 October 2021 (7 commits)