1. 30 Sep 2020, 4 commits
  2. 21 Aug 2020, 1 commit
  3. 10 Jul 2020, 2 commits
  4. 07 Jul 2020, 2 commits
  5. 06 Jul 2020, 4 commits
  6. 09 Jun 2020, 3 commits
    • KVM: arm64: Remove host_cpu_context member from vcpu structure · 07da1ffa
      Authored by Marc Zyngier
      For a very long time, we have kept this pointer back to the per-cpu
      host state, despite having had working per-cpu accessors at EL2
      for some time now.
      
      Recent investigations have shown that this pointer is easy
      to abuse in preemptible context, which is a sure sign that
      it would be better off gone. Not to mention that a per-cpu
      pointer is also faster to access. A rough sketch of the per-cpu
      access pattern follows this entry.
      Reported-by: Andrew Scull <ascull@google.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Andrew Scull <ascull@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      07da1ffa
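      A rough sketch of the access pattern this moves to; the helper name is
      illustrative and the exact structure layout is an assumption, even though
      kvm_host_data/host_ctxt follow the upstream naming:

          /* Hedged sketch, not the literal patch: resolve the host context
           * through the per-CPU variable at the point of use. The caller
           * must not be preemptible, otherwise the CPU (and therefore the
           * pointer) could change under its feet. */
          static struct kvm_cpu_context *get_host_ctxt(void)
          {
                  return &this_cpu_ptr(&kvm_host_data)->host_ctxt;
          }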
    • KVM: arm64: Handle PtrAuth traps early · 29eb5a3c
      Authored by Marc Zyngier
      The current way we deal with PtrAuth is a bit heavy-handed:
      
      - We forcefully save the host's keys on each vcpu_load()
      - Handling the PtrAuth trap forces us to go all the way back
        to the exit handling code to just set the HCR bits
      
      Overall, this is pretty cumbersome. A better approach would be
      to handle it the same way we deal with the FPSIMD registers:
      
      - On vcpu_load() disable PtrAuth for the guest
      - On first use, save the host's keys, enable PtrAuth in the
        guest
      
      Crucially, this can happen as a fixup, which is done very early
      on exit. We can then reenter the guest immediately without
      leaving the hypervisor role.
      
      Another benefit is that it simplifies the rest of the host handling:
      exiting all the way to the host means that the only possible
      outcome for this trap is to inject an UNDEF. A minimal sketch of
      the lazy-enable flow follows this entry.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      29eb5a3c
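      A minimal sketch of the lazy-enable fixup described above; the helper
      name save_host_ptrauth_keys() is a hypothetical stand-in, while
      vcpu_has_ptrauth() and the HCR_API/HCR_APK bits are existing arm64 KVM
      definitions:

          /* Runs in the early exit fixup path, still at EL2. */
          static bool fixup_ptrauth_trap(struct kvm_vcpu *vcpu)
          {
                  if (!vcpu_has_ptrauth(vcpu))
                          return false;   /* not handled: the host injects an UNDEF */

                  /* First use by the guest: stash the host's keys ... */
                  save_host_ptrauth_keys(vcpu);
                  /* ... and stop trapping PtrAuth for this guest. */
                  vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);

                  return true;            /* handled: reenter the guest immediately */
          }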
    • KVM: arm64: Save the host's PtrAuth keys in non-preemptible context · ef3e40a7
      Authored by Marc Zyngier
      When using the PtrAuth feature in a guest, we need to save the host's
      keys before allowing the guest to program them. For that, we dump
      them in a per-CPU data structure (the so-called host context).
      
      But both call sites that do this are in preemptible context,
      which may end up in disaster should the vcpu thread get preempted
      before reentering the guest.
      
      Instead, save the keys eagerly on each vcpu_load(). This adds some
      overhead, but is at least safe. A sketch follows this entry.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      ef3e40a7
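      A short sketch of the eager save; kvm_arch_vcpu_load() is the real
      entry point, while the key-saving helper is a hypothetical stand-in:

          /* vcpu_load() runs with preemption disabled, so writing the keys
           * into this CPU's host context cannot race with a migration. */
          void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
          {
                  /* ... existing load work ... */

                  if (vcpu_has_ptrauth(vcpu))
                          save_host_ptrauth_keys(vcpu);   /* hypothetical helper */
          }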
  7. 04 Jun 2020, 1 commit
    • KVM: let kvm_destroy_vm_debugfs clean up vCPU debugfs directories · d56f5136
      Authored by Paolo Bonzini
      After commit 63d04348 ("KVM: x86: move kvm_create_vcpu_debugfs after
      last failure point") we are creating the per-vCPU debugfs files
      after the creation of the vCPU file descriptor.  This makes it
      possible for userspace to reach kvm_vcpu_release before
      kvm_create_vcpu_debugfs has finished.  The vcpu->debugfs_dentry
      then does not have any associated inode anymore, and this causes
      a NULL-pointer dereference in debugfs_create_file.
      
      The solution is simply to avoid removing the files; they are
      cleaned up when the VM file descriptor is closed (and that must be
      after KVM_CREATE_VCPU returns).  We can stop storing the dentry
      in struct kvm_vcpu too, because it is not needed anywhere after
      kvm_create_vcpu_debugfs returns. A sketch of the resulting ownership
      model follows this entry.
      
      Reported-by: syzbot+705f4401d5a93a59b87d@syzkaller.appspotmail.com
      Fixes: 63d04348 ("KVM: x86: move kvm_create_vcpu_debugfs after last failure point")
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d56f5136
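      A sketch of that ownership model, with simplified function names; the
      point is that each per-vCPU directory is a child of the VM's debugfs
      directory, so no dentry has to live in struct kvm_vcpu:

          static void create_vcpu_debugfs_sketch(struct kvm_vcpu *vcpu,
                                                 struct dentry *vm_dir)
          {
                  char name[16];

                  snprintf(name, sizeof(name), "vcpu%d", vcpu->vcpu_id);
                  /* Child of the VM directory; the returned dentry is
                   * deliberately not stored anywhere. */
                  debugfs_create_dir(name, vm_dir);
          }

          static void destroy_vm_debugfs_sketch(struct kvm *kvm)
          {
                  /* Removes the VM directory and every per-vCPU child,
                   * once KVM_CREATE_VCPU can no longer race with us. */
                  debugfs_remove_recursive(kvm->debugfs_dentry);
          }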
  8. 31 May 2020, 1 commit
  9. 25 May 2020, 1 commit
  10. 16 May 2020, 4 commits
  11. 14 May 2020, 1 commit
    • kvm: Replace vcpu->swait with rcuwait · da4ad88c
      Authored by Davidlohr Bueso
      The use of any sort of waitqueue (simple or regular) for
      wait/waking vcpus has always been overkill and semantically
      wrong. Because this is per-vcpu (which is blocked), there is
      only ever a single waiting vcpu, so there is no need for any
      sort of queue.
      
      As such, make use of the rcuwait primitive, with the following
      considerations:
      
        - rcuwait already provides the proper barriers that serialize
        concurrent waiter and waker.
      
        - Task wakeup is done in an RCU read-side critical section, with a
        stable task pointer.
      
        - Because there is no concurrency among waiters, we need
        not worry about rcuwait_wait_event() calls corrupting
        the wait->task. As a consequence, this saves the locking
        done in swait when modifying the queue. This also applies
        to per-vcore wait for powerpc kvm-hv.
      
      The x86 tscdeadline_latency test mentioned in 8577370f
      ("KVM: Use simple waitqueue for vcpu->wq") shows that, on average,
      latency is reduced by around 15-20% with this change. A minimal,
      self-contained sketch of the rcuwait pattern follows this entry.
      
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: kvmarm@lists.cs.columbia.edu
      Cc: linux-mips@vger.kernel.org
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Message-Id: <20200424054837.5138-6-dave@stgolabs.net>
      [Avoid extra logic changes. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      da4ad88c
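      A self-contained sketch of the rcuwait pattern (illustrative structure
      and wake condition, not the actual KVM code):

          #include <linux/rcuwait.h>
          #include <linux/sched.h>

          struct halted_vcpu {
                  struct rcuwait wait;    /* rcuwait_init() at setup time */
                  bool wake_pending;
          };

          static void block_until_kicked(struct halted_vcpu *v)
          {
                  /* Single waiter: publish the current task and sleep until
                   * the condition becomes true. */
                  rcuwait_wait_event(&v->wait, READ_ONCE(v->wake_pending),
                                     TASK_INTERRUPTIBLE);
          }

          static void kick(struct halted_vcpu *v)
          {
                  WRITE_ONCE(v->wake_pending, true);
                  /* Wakes the at-most-one waiter from within an RCU
                   * read-side critical section; no queue lock needed. */
                  rcuwait_wake_up(&v->wait);
          }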
  12. 21 Apr 2020, 1 commit
  13. 31 Mar 2020, 1 commit
  14. 24 Mar 2020, 1 commit
  15. 17 Mar 2020, 1 commit
    • KVM: Provide common implementation for generic dirty log functions · 0dff0846
      Authored by Sean Christopherson
      Move the implementations of KVM_GET_DIRTY_LOG and KVM_CLEAR_DIRTY_LOG
      for CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT into common KVM code.
      The arch-specific implementations are extremely similar, differing
      only in whether the dirty log needs to be sync'd from hardware (x86)
      and how the TLBs are flushed.  Add new arch hooks to handle sync
      and TLB flush; the sync will also be used for non-generic dirty log
      support in a future patch (s390). The hook split is sketched after
      this entry.
      
      The ulterior motive for providing a common implementation is to
      eliminate the dependency between arch and common code with respect to
      the memslot referenced by the dirty log, i.e. to make it obvious in the
      code that the validity of the memslot is guaranteed, as a future patch
      will rework memslot handling such that id_to_memslot() can return NULL.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0dff0846
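      A hedged sketch of the arch side of the split; the hook names follow
      the commit's description of a dirty-log sync hook and a per-memslot
      TLB flush hook, and the bodies here are placeholders:

          void kvm_arch_sync_dirty_log(struct kvm *kvm,
                                       struct kvm_memory_slot *memslot)
          {
                  /* x86 pulls hardware dirty state (e.g. PML) into the
                   * software bitmap here; architectures without such
                   * state leave this empty. */
          }

          void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
                                                  struct kvm_memory_slot *memslot)
          {
                  /* Flush stage-2/EPT mappings for the memslot whose dirty
                   * bits were just cleared, so subsequent writes fault and
                   * are marked dirty again. */
          }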
  16. 17 Feb 2020, 1 commit
  17. 28 Jan 2020, 5 commits
  18. 24 Jan 2020, 3 commits
  19. 20 Jan 2020, 2 commits
  20. 06 Dec 2019, 1 commit