1. 15 November 2020, 32 commits
    • KVM: selftests: Also build dirty_log_perf_test on AArch64 · 87c5f35e
      Committed by Andrew Jones
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20201111122636.73346-10-drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Introduce vm_create_[default_]_with_vcpus · 0aa9ec45
      Committed by Andrew Jones
      Introduce new vm_create variants that also take a number of vcpus,
      an amount of per-vcpu pages, and optionally a list of vcpuids. These
      variants will create default VMs with enough additional pages to
      cover the vcpu stacks, per-vcpu pages, and pagetable pages for all.
      The new 'default' variant uses VM_MODE_DEFAULT, whereas the other
      new variant accepts the mode as a parameter.
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20201111122636.73346-6-drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Make vm_create_default common · ec2f18bb
      Committed by Andrew Jones
      The code is almost 100% the same anyway. Just move it to common
      code and add a few arch-specific macros.
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20201111122636.73346-5-drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: always use manual clear in dirty_log_perf_test · f63f0b68
      Committed by Paolo Bonzini
      Nothing sets USE_CLEAR_DIRTY_LOG anymore, so anything it surrounds
      is dead code.

      However, manual clearing is the recommended way to use the dirty
      page bitmap on new enough kernels, so use it whenever KVM has the
      KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 capability.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: x86: Sink cpuid update into vendor-specific set_cr4 functions · 2259c17f
      Committed by Jim Mattson
      On emulated VM-entry and VM-exit, update the CPUID bits that reflect
      CR4.OSXSAVE and CR4.PKE.

      This fixes a bug where the CPUID bits could continue to reflect L2 CR4
      values after emulated VM-exit to L1. It also fixes a related bug where
      the CPUID bits could continue to reflect L1 CR4 values after emulated
      VM-entry to L2. The latter bug is mainly relevant to SVM, wherein
      CPUID is not a required intercept. However, it could also be relevant
      to VMX, because the code to conditionally update these CPUID bits
      assumes that the guest CPUID and the guest CR4 are always in sync.

      Fixes: 8eb3f87d ("KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit")
      Fixes: 2acf923e ("KVM: VMX: Enable XSAVE/XRSTOR for guest")
      Fixes: b9baba86 ("KVM, pkeys: expose CPUID/CR4 to guest")
      Reported-by: Abhiroop Dabral <adabral@paloaltonetworks.com>
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Ricardo Koller <ricarkol@google.com>
      Reviewed-by: Peter Shier <pshier@google.com>
      Cc: Haozhong Zhang <haozhong.zhang@intel.com>
      Cc: Dexuan Cui <dexuan.cui@intel.com>
      Cc: Huaitong Han <huaitong.han@intel.com>
      Message-Id: <20201029170648.483210-1-jmattson@google.com>
    • selftests: kvm: keep .gitignore add to date · 8aa426e8
      Committed by Paolo Bonzini
      Add tsc_msrs_test, remove clear_dirty_log_test and alphabetize
      everything.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Add "-c" parameter to dirty log test · edd3de6f
      Committed by Peter Xu
      It's only used to override the existing dirty ring size/count.  With
      a bigger ring count we test the async path of the dirty ring; with a
      smaller ring count we test the ring-full code path.  Async is the
      default.

      It has no use for non-dirty-ring tests.
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012241.6208-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Run dirty ring test asynchronously · 019d321a
      Committed by Peter Xu
      Previously the dirty ring test worked synchronously, because only on
      a vmexit (namely, the ring-full event) do we know that the hardware
      dirty bits have been flushed to the dirty ring.

      With this patch we first introduce a vcpu kick mechanism using SIGUSR1,
      which guarantees a vmexit and therefore the flushing of the hardware
      dirty bits.  Once this is in place, we can keep the vcpu dirtying work
      asynchronous with respect to the whole collection procedure.  Still,
      we need to be very careful that when reaching the ring buffer soft
      limit (KVM_EXIT_DIRTY_RING_FULL) we must collect the dirty bits before
      continuing the vcpu.

      Further increase the dirty ring size to the current maximum, to make
      sure we torture the no-ring-full case more, since that should be the
      major scenario when hypervisors like QEMU use this feature.
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012239.6159-1-peterx@redhat.com>
      [Use KVM_SET_SIGNAL_MASK+sigwait instead of a signal handler. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Add dirty ring buffer test · 84292e56
      Committed by Peter Xu
      Add the initial dirty ring buffer test.

      The current test implements userspace dirty ring collection by
      reaping the dirty ring only when the ring is full.

      So it still runs synchronously, like this:

                  vcpu                             main thread

        1. vcpu dirties pages
        2. vcpu gets dirty ring full
           (userspace exit)

                                             3. main thread waits until full
                                                (so hardware buffers flushed)
                                             4. main thread collects
                                             5. main thread continues vcpu

        6. vcpu continues, goes back to 1

      We can't collect dirty bits directly during vcpu execution, because
      otherwise we can't guarantee that the hardware dirty bits were
      flushed when we collect them, and since we're very strict about the
      dirty bits, the later verification procedure could fail.  A follow-up
      patch will make this test support async operation, just like the
      existing dirty log test, by adding a vcpu kick mechanism.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012237.6111-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Introduce after_vcpu_run hook for dirty log test · 60f644fb
      Committed by Peter Xu
      Provide a hook for the checks after vcpu_run() completes.  This is
      preparation for the dirty ring test, because we'll need to take care
      of another exit reason.
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012235.6063-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: Don't allocate dirty bitmap if dirty ring is enabled · 044c59c4
      Committed by Peter Xu
      Because KVM dirty rings and the KVM dirty log are used in an exclusive
      way, let's avoid creating the dirty_bitmap when the KVM dirty ring is
      enabled.  Meanwhile, since the dirty_bitmap will now be conditionally
      created, we can't use it as a sign of "whether this memory slot
      enabled dirty tracking".  Change such users to check against the KVM
      memory slot flags.

      Note that there can still be cases where a KVM memory slot gets its
      dirty_bitmap allocated: _if_ memory slots are created before the
      dirty ring is enabled, and with the dirty tracking capability set,
      they'll still have a dirty_bitmap.  However, this should not hurt
      much (e.g., the bitmaps will always be freed if they are there), and
      real users normally won't trigger it, because the dirty tracking flag
      should in most cases only be applied to KVM slots right before
      migration starts, which is far later than KVM initialization (VM
      start).
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012226.5868-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: Make dirty ring exclusive to dirty bitmap log · b2cc64c4
      Committed by Peter Xu
      There's no good reason to use both the dirty bitmap logging and the
      new dirty ring buffer to track dirty bits.  We could probably even
      support both of them at the same time, but it would complicate things
      while helping little.  Let's simply make it the rule, before we
      enable the dirty ring on any arch, that these two interfaces cannot
      be used together.

      The big switch is the KVM_CAP_DIRTY_LOG_RING capability enablement.
      That's where we switch from the default dirty logging way to the
      dirty ring way.  As long as kvm->dirty_ring_size is set up correctly,
      we switch once and for all to the dirty ring buffer mode for the
      current virtual machine.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012224.5818-1-peterx@redhat.com>
      [Change errno from EINVAL to ENXIO. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Implement ring-based dirty memory tracking · fb04a1ed
      Committed by Peter Xu
      This patch is heavily based on previous work from Lei Cao
      <lei.cao@stratus.com> and Paolo Bonzini <pbonzini@redhat.com>. [1]

      KVM currently uses large bitmaps to track dirty memory.  These bitmaps
      are copied to userspace when userspace queries KVM for its dirty page
      information.  The use of bitmaps is mostly sufficient for live
      migration, as large parts of memory are dirtied from one log-dirty
      pass to another.  However, in a checkpointing system, the number of
      dirty pages is small and in fact it is often bounded---the VM is
      paused when it has dirtied a pre-defined number of pages. Traversing a
      large, sparsely populated bitmap to find set bits is time-consuming,
      as is copying the bitmap to user-space.

      A similar issue arises for live migration when the guest memory is
      huge while the page dirtying is trivial.  In that case, for each
      dirty sync we need to pull the whole dirty bitmap to userspace and
      analyse every bit, even if it is mostly zeros.

      The preferred data structure for the above scenarios is a dense list
      of guest frame numbers (GFNs).  This patch series stores the dirty
      list in kernel memory that can be memory mapped into userspace to
      allow speedy harvesting.

      This patch enables the dirty ring for X86 only.  However, it should
      be easily extended to other archs as well.

      [1] https://patchwork.kernel.org/patch/10471409/
      Signed-off-by: Lei Cao <lei.cao@stratus.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012222.5767-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: Pass in kvm pointer into mark_page_dirty_in_slot() · 28bd726a
      Committed by Peter Xu
      The context will be needed to implement the kvm dirty ring.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012044.5151-5-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: remove kvm_clear_guest_page · 2f541442
      Committed by Paolo Bonzini
      kvm_clear_guest_page is not used anymore after "KVM: X86: Don't track
      dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR]", except from
      kvm_clear_guest.  We can just inline it in its sole user.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR] · ff5a983c
      Committed by Peter Xu
      Originally, we had three code paths that can dirty a page without
      vcpu context on X86:

        - init_rmode_identity_map
        - init_rmode_tss
        - kvmgt_rw_gpa

      init_rmode_identity_map and init_rmode_tss will be set up on the
      destination VM no matter what (and the guest cannot even see them),
      so it does not make sense to track them at all.

      To do this, allow __x86_set_memory_region() to return the userspace
      address that was just allocated to the caller.  Then, in both of the
      functions, we directly write to the userspace address instead of
      calling the kvm_write_*() APIs.

      Another trivial change is that we don't need to explicitly clear the
      identity page table root in init_rmode_identity_map(), because we
      will write to the whole page with 4M huge page entries anyway.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20201001012044.5151-4-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: test KVM_GET_SUPPORTED_HV_CPUID as a system ioctl · 8b460692
      Committed by Vitaly Kuznetsov
      KVM_GET_SUPPORTED_HV_CPUID is now supported as both a vCPU and a VM
      ioctl; test that.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200929150944.1235688-3-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: hyper-v: allow KVM_GET_SUPPORTED_HV_CPUID as a system ioctl · c21d54f0
      Committed by Vitaly Kuznetsov
      KVM_GET_SUPPORTED_HV_CPUID is a vCPU ioctl, but its output is now
      independent of the vCPU, and in some cases VMMs may want to use it as
      a system ioctl instead. In particular, QEMU does CPU feature
      expansion before any vCPU gets created, so KVM_GET_SUPPORTED_HV_CPUID
      can't be used.

      Convert KVM_GET_SUPPORTED_HV_CPUID to a 'dual' system/vCPU ioctl with
      the same meaning.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20200929150944.1235688-2-vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm/eventfd: Drain events from eventfd in irqfd_wakeup() · b59e00dd
      Committed by David Woodhouse
      Don't allow the events to accumulate in the eventfd counter, drain
      them as they are handled.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20201027135523.646811-4-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • vfio/virqfd: Drain events from eventfd in virqfd_wakeup() · b1b397ae
      Committed by David Woodhouse
      Don't allow the events to accumulate in the eventfd counter, drain
      them as they are handled.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20201027135523.646811-3-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Alex Williamson <alex.williamson@redhat.com>
    • eventfd: Export eventfd_ctx_do_read() · 28f13267
      Committed by David Woodhouse
      Where events are consumed in the kernel, for example by KVM's
      irqfd_wakeup() and VFIO's virqfd_wakeup(), they currently lack a
      mechanism to drain the eventfd's counter.

      Since the wait queue is already locked while the wakeup functions are
      invoked, all they really need to do is call eventfd_ctx_do_read().

      Add a check for the lock, and export it for them.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20201027135523.646811-2-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm/eventfd: Use priority waitqueue to catch events before userspace · e8dbf195
      Committed by David Woodhouse
      As far as I can tell, when we use posted interrupts we silently cut
      off the events from userspace if it's listening on the same eventfd
      that feeds the irqfd.

      I like that behaviour. Let's do it all the time, even without posted
      interrupts. It makes it much easier to handle IRQ remapping
      invalidation without having to constantly add/remove the fd from the
      userspace poll set. We can just leave userspace polling on it, and
      the bypass will... well... bypass it.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20201026175325.585623-2-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • sched/wait: Add add_wait_queue_priority() · c4d51a52
      Committed by David Woodhouse
      This allows an exclusive wait_queue_entry to be added at the head of
      the queue, instead of the tail as normal. Thus, it gets to consume
      events first without allowing non-exclusive waiters to be woken at
      all.

      The (first) intended use is for KVM IRQFD, which currently has
      inconsistent behaviour depending on whether posted interrupts are
      available or not. If they are, KVM will bypass the eventfd completely
      and deliver interrupts directly to the appropriate vCPU. If not,
      events are delivered through the eventfd and userspace will receive
      them when polling on the eventfd.

      By using add_wait_queue_priority(), KVM will be able to consistently
      consume events within the kernel without accidentally exposing them
      to userspace when they're supposed to be bypassed. This, in turn,
      means that userspace doesn't have to jump through hoops to avoid
      listening on the erroneously noisy eventfd and injecting duplicate
      interrupts.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20201027143944.648769-2-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: emulate wait-for-SIPI and SIPI-VMExit · bf0cd88c
      Committed by Yadong Qi
      Background: We have a lightweight HV that needs INIT-VMExit and
      SIPI-VMExit to wake up APs for guests, since it does not monitor
      the Local APIC. But the virtual wait-for-SIPI (WFS) state is
      currently not supported in nVMX, so when running on top of KVM the
      L1 HV cannot receive the INIT-VMExit and SIPI-VMExit, and therefore
      the L2 guest cannot wake up its APs.

      According to Intel SDM Chapter 25.2, Other Causes of VM Exits,
      SIPIs cause VM exits when a logical processor is in the
      wait-for-SIPI state.

      This patch:
          1. introduces a SIPI exit reason,
          2. introduces the wait-for-SIPI state for nVMX,
          3. advertises wait-for-SIPI support to the guest.

      When the L1 hypervisor is not monitoring the Local APIC, L0 needs to
      emulate INIT-VMExit and SIPI-VMExit to L1 in order to emulate
      INIT-SIPI-SIPI for L2. An L2 LAPIC write is trapped by the L0
      hypervisor (KVM), which emulates the INIT/SIPI vmexit to the L1
      hypervisor so that L1 sets the proper state for L2's vCPU.

      Handling procedure:
      Source vCPU:
          L2 writes LAPIC.ICR(INIT).
          L0 traps the LAPIC.ICR write (INIT): inject a latched INIT event
             to the target vCPU.
      Target vCPU:
          L0 emulates an INIT VMExit to L1 if in guest mode.
          L1 sets up the guest VMCS, guest_activity_state=WAIT_SIPI,
             vmresume.
          L0 sets vcpu.mp_state to INIT_RECEIVED if
             (vmcs12.guest_activity_state == WAIT_SIPI).

      Source vCPU:
          L2 writes LAPIC.ICR(SIPI).
          L0 traps the LAPIC.ICR write (SIPI): inject a latched SIPI event
             to the target vCPU.
      Target vCPU:
          L0 emulates an SIPI VMExit to L1 if
             (vcpu.mp_state == INIT_RECEIVED).
          L1 sets CS:IP, guest_activity_state=ACTIVE, vmresume.
          L0 resumes to L2.
          L2 starts up.
      Signed-off-by: Yadong Qi <yadong.qi@intel.com>
      Message-Id: <20200922052343.84388-1-yadong.qi@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20201106065122.403183-1-yadong.qi@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: fix apic_accept_events vs check_nested_events · 1c96dcce
      Committed by Paolo Bonzini
      vmx_apic_init_signal_blocked is buggy in that it returns true
      even in VMX non-root mode.  In non-root mode, however, INITs
      are not latched; they just cause a vmexit.  Previously, KVM was
      waiting for them to be processed in kvm_apic_accept_events, and
      in the meanwhile it ate the SIPIs that the processor received.

      However, in order to implement the wait-for-SIPI activity state,
      KVM will have to process KVM_APIC_SIPI in vmx_check_nested_events,
      and it will no longer be possible to disregard SIPIs in non-root
      mode as the code currently does.

      By calling kvm_x86_ops.nested_ops->check_events, we can force a
      vmexit (with the side effect of latching INITs) before incorrectly
      injecting an INIT or SIPI in a guest, and therefore
      vmx_apic_init_signal_blocked can do the right thing.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Verify supported CR4 bits can be set before KVM_SET_CPUID2 · 7a873e45
      Committed by Sean Christopherson
      Extend the KVM_SET_SREGS test to verify that all supported CR4 bits,
      as enumerated by KVM, can be set before KVM_SET_CPUID2, i.e. without
      first defining the vCPU model.  KVM is supposed to skip guest CPUID
      checks when host userspace is stuffing guest state.

      Check the inverse as well, i.e. that KVM rejects KVM_SET_SREGS if
      CR4 has one or more unsupported bits set.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-7-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Return bool instead of int for CR4 and SREGS validity checks · ee69c92b
      Committed by Sean Christopherson
      Rework the common CR4 and SREGS checks to return a bool instead of an
      int, i.e. true/false instead of 0/-EINVAL, and add "is" to the name
      to clarify the polarity of the return value (which is effectively
      inverted by this change).

      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-6-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Move vendor CR4 validity check to dedicated kvm_x86_ops hook · c2fe3cd4
      Committed by Sean Christopherson
      Split out VMX's checks on CR4.VMXE to a dedicated hook,
      .is_valid_cr4(), and invoke the new hook from kvm_valid_cr4().  This
      fixes an issue where KVM_SET_SREGS would return success while failing
      to actually set CR4.

      Fixing the issue by explicitly checking kvm_x86_ops.set_cr4()'s
      return in __set_sregs() is not a viable option as KVM has already
      stuffed a variety of vCPU state.

      Note, kvm_valid_cr4() and is_valid_cr4() have different return types
      and inverted semantics.  This will be remedied in a future patch.

      Fixes: 5e1746d6 ("KVM: nVMX: Allow setting the VMXE bit in CR4")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-5-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Drop VMXE check from svm_set_cr4() · 311a0659
      Committed by Sean Christopherson
      Drop svm_set_cr4()'s explicit check on CR4.VMXE now that common x86
      handles the check by incorporating VMXE into the CR4 reserved bits,
      via kvm_cpu_caps.  SVM obviously does not set X86_FEATURE_VMX.

      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-4-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Drop explicit 'nested' check from vmx_set_cr4() · a447e38a
      Committed by Sean Christopherson
      Drop vmx_set_cr4()'s explicit check on the 'nested' module param now
      that common x86 handles the check by incorporating VMXE into the CR4
      reserved bits, via kvm_cpu_caps.  X86_FEATURE_VMX is set in
      kvm_cpu_caps (by vmx_set_cpu_caps()), if and only if 'nested' is
      true.

      No functional change intended.
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-3-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Drop guest CPUID check for VMXE in vmx_set_cr4() · d3a9e414
      Committed by Sean Christopherson
      Drop vmx_set_cr4()'s somewhat hidden guest_cpuid_has() check on VMXE
      now that common x86 handles the check by incorporating VMXE into the
      CR4 reserved bits, i.e. in cr4_guest_rsvd_bits.  This fixes a bug
      where KVM incorrectly rejects KVM_SET_SREGS with CR4.VMXE=1 if it's
      executed before KVM_SET_CPUID{,2}.

      Fixes: 5e1746d6 ("KVM: nVMX: Allow setting the VMXE bit in CR4")
      Reported-by: Stas Sergeev <stsp@users.sourceforge.net>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20201007014417.29276-2-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: mmu: fix is_tdp_mmu_check when the TDP MMU is not in use · c887c9b9
      Committed by Paolo Bonzini
      In some cases where shadow paging is in use, the root page will
      be either mmu->pae_root or vcpu->arch.mmu->lm_root.  Then it will
      not have an associated struct kvm_mmu_page, because it is allocated
      with alloc_page instead of kvm_mmu_alloc_page.

      Just return false quickly from is_tdp_mmu_root if the TDP MMU is
      not in use, which also covers the case where shadow paging is
      enabled.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2. 13 November 2020, 8 commits
    • KVM: SVM: Update cr3_lm_rsvd_bits for AMD SEV guests · 96308b06
      Committed by Babu Moger
      For AMD SEV guests, update cr3_lm_rsvd_bits to mask out the memory
      encryption bit from the reserved bits.
      Signed-off-by: Babu Moger <babu.moger@amd.com>
      Message-Id: <160521948301.32054.5783800787423231162.stgit@bmoger-ubuntu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Introduce cr3_lm_rsvd_bits in kvm_vcpu_arch · 0107973a
      Committed by Babu Moger
      SEV guests fail to boot on a system that supports the PCID feature.

      While emulating the RSM instruction, KVM reads the guest CR3
      and calls kvm_set_cr3(). If the vCPU is in long mode,
      kvm_set_cr3() does a sanity check on the CR3 value; in this case,
      it validates whether the value has any reserved bits set. The
      reserved bit range is 63:cpuid_maxphysaddr(). When AMD memory
      encryption is enabled, the memory encryption bit is set in the CR3
      value. The memory encryption bit may fall within the KVM reserved
      bit range, causing a KVM emulation failure.

      Introduce a new field, cr3_lm_rsvd_bits, in kvm_vcpu_arch, which
      caches the reserved bits in the CR3 value. It is initialized to
      rsvd_bits(cpuid_maxphyaddr(vcpu), 63).

      If the architecture has any special bits (like the AMD SEV
      encryption bit) that need to be masked from the reserved bits, they
      should be cleared in the vendor-specific
      kvm_x86_ops.vcpu_after_set_cpuid handler.

      Fixes: a780a3ea ("KVM: X86: Fix reserved bits check for MOV to CR3")
      Signed-off-by: Babu Moger <babu.moger@amd.com>
      Message-Id: <160521947657.32054.3264016688005356563.stgit@bmoger-ubuntu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: clflushopt should be treated as a no-op by emulation · 51b958e5
      Committed by David Edmondson
      The instruction emulator ignores clflush instructions, yet fails to
      support clflushopt. Treat both similarly.

      Fixes: 13e457e0 ("KVM: x86: Emulator does not decode clflush well")
      Signed-off-by: David Edmondson <david.edmondson@oracle.com>
      Message-Id: <20201103120400.240882-1-david.edmondson@oracle.com>
      Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • Merge tag 'kvmarm-fixes-5.10-3' of... · 2c38234c
      Committed by Paolo Bonzini
      Merge tag 'kvmarm-fixes-5.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

      KVM/arm64 fixes for v5.10, take #3

      - Allow userspace to downgrade ID_AA64PFR0_EL1.CSV2
      - Inject UNDEF on SCXTNUM_ELx access
    • Merge tag 'fscrypt-for-linus' of git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt · 585e5b17
      Committed by Linus Torvalds
      Pull fscrypt fix from Eric Biggers:
       "Fix a regression where new files weren't using inline encryption when
        they should be"

      * tag 'fscrypt-for-linus' of git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt:
        fscrypt: fix inline encryption not used on new files
    • Merge tag 'gfs2-v5.10-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2 · 20ca21df
      Committed by Linus Torvalds
      Pull gfs2 fixes from Andreas Gruenbacher:
       "Fix jdata data corruption and glock reference leak"

      * tag 'gfs2-v5.10-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
        gfs2: Fix case in which ail writes are done to jdata holes
        Revert "gfs2: Ignore journal log writes for jdata holes"
        gfs2: fix possible reference leak in gfs2_check_blk_type
    • Merge tag 'net-5.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net · db7c9535
      Committed by Linus Torvalds
      Pull networking fixes from Jakub Kicinski:
       "Current release - regressions:
      
         - arm64: dts: fsl-ls1028a-kontron-sl28: specify in-band mode for
           ENETC
      
        Current release - bugs in new features:
      
         - mptcp: provide rmem[0] limit offset to fix oops
      
        Previous release - regressions:
      
         - IPv6: Set SIT tunnel hard_header_len to zero to fix path MTU
           calculations
      
         - lan743x: correctly handle chips with internal PHY
      
         - bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE
      
         - mlx5e: Fix VXLAN port table synchronization after function reload
      
        Previous release - always broken:
      
         - bpf: Zero-fill re-used per-cpu map element
      
         - fix out-of-order UDP packets when forwarding with UDP GSO fraglists
           turned on:
             - fix UDP header access on Fast/frag0 UDP GRO
             - fix IP header access and skb lookup on Fast/frag0 UDP GRO
      
         - ethtool: netlink: add missing netdev_features_change() call
      
         - net: Update window_clamp if SOCK_RCVBUF is set
      
         - igc: Fix returning wrong statistics
      
         - ch_ktls: fix multiple leaks and corner cases in Chelsio TLS offload
      
         - tunnels: Fix off-by-one in lower MTU bounds for ICMP/ICMPv6 replies
      
         - r8169: disable hw csum for short packets on all chip versions
      
         - vrf: Fix fast path output packet handling with async Netfilter
           rules"
      
      * tag 'net-5.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (65 commits)
        lan743x: fix use of uninitialized variable
        net: udp: fix IP header access and skb lookup on Fast/frag0 UDP GRO
        net: udp: fix UDP header access on Fast/frag0 UDP GRO
        devlink: Avoid overwriting port attributes of registered port
        vrf: Fix fast path output packet handling with async Netfilter rules
        cosa: Add missing kfree in error path of cosa_write
        net: switch to the kernel.org patchwork instance
        ch_ktls: stop the txq if reaches threshold
        ch_ktls: tcb update fails sometimes
        ch_ktls/cxgb4: handle partial tag alone SKBs
        ch_ktls: don't free skb before sending FIN
        ch_ktls: packet handling prior to start marker
        ch_ktls: Correction in middle record handling
        ch_ktls: missing handling of header alone
        ch_ktls: Correction in trimmed_len calculation
        cxgb4/ch_ktls: creating skbs causes panic
        ch_ktls: Update cheksum information
        ch_ktls: Correction in finding correct length
        cxgb4/ch_ktls: decrypted bit is not enough
        net/x25: Fix null-ptr-deref in x25_connect
        ...
    • Merge tag 'nfs-for-5.10-2' of git://git.linux-nfs.org/projects/anna/linux-nfs · 200f9d21
      Committed by Linus Torvalds
      Pull NFS client bugfixes from Anna Schumaker:
       "Stable fixes:
        - Fix failure to unregister shrinker
      
        Other fixes:
        - Fix unnecessary locking to clear up some contention
        - Fix listxattr receive buffer size
        - Fix default mount options for nfsroot"
      
      * tag 'nfs-for-5.10-2' of git://git.linux-nfs.org/projects/anna/linux-nfs:
        NFS: Remove unnecessary inode lock in nfs_fsync_dir()
        NFS: Remove unnecessary inode locking in nfs_llseek_dir()
        NFS: Fix listxattr receive buffer size
        NFSv4.2: fix failure to unregister shrinker
        nfsroot: Default mount option should ask for built-in NFS version