1. 24 June 2022 (1 commit)
    • KVM: selftests: Add NX huge pages test · 8448ec59
      Committed by Ben Gardon
      There's currently no test coverage of NX hugepages in KVM selftests, so
      add a basic test to ensure that the feature works as intended.
      
      The test creates a VM with a data slot backed with huge pages. The
      memory in the data slot is filled with op-codes for the return
      instruction. The guest then executes a series of accesses on the memory,
      some reads, some instruction fetches. After each operation, the guest
      exits and the test performs some checks on the backing page counts to
      ensure that NX page splitting and reclaim work as expected.
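
      A rough sketch of those guest accesses (not the actual selftest code; the
      helper names are made up, and the data slot is assumed to be filled with
      x86 RET opcodes so that executing it is safe):

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        #define RET_OPCODE 0xc3   /* x86 "ret" instruction */

        /* Host side: make every byte of the data slot an executable no-op. */
        static void fill_slot_with_ret(uint8_t *hva, size_t size)
        {
                memset(hva, RET_OPCODE, size);
        }

        /* Guest side: a data read leaves an NX huge page intact, while an
         * instruction fetch forces KVM to split it into 4KiB pages. */
        static void guest_access(uint8_t *page, int execute)
        {
                if (execute)
                        ((void (*)(void))page)();         /* fetch and run the ret */
                else
                        (void)*(volatile uint8_t *)page;  /* plain data read */
        }
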
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20220613212523.3436117-7-bgardon@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 23 June 2022 (1 commit)
    • KVM: selftests: Add MONITOR/MWAIT quirk test · 2325d4dd
      Committed by Sean Christopherson
      Add a test to verify the "MONITOR/MWAIT never fault" quirk, and as a
      bonus, also verify the related "MISC_ENABLES ignores ENABLE_MWAIT" quirk.
      
      If the "never fault" quirk is enabled, MONITOR/MWAIT should always be
      emulated as NOPs, even if they're reported as disabled in guest CPUID.
      Use the MISC_ENABLES quirk to coerce KVM into toggling the MWAIT CPUID
      enable, as KVM now disallows manually toggling CPUID bits after running
      the vCPU.
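
      For reference, the guest side boils down to executing the two instructions
      back to back; a minimal sketch (the helper name is illustrative, the
      register layout is architectural):

        #include <stdint.h>

        /* MONITOR: RAX = linear address to arm, RCX = extensions, RDX = hints.
         * MWAIT:   RAX = hints, RCX = extensions.
         * With the "never fault" quirk enabled, both should behave as NOPs even
         * if CPUID reports MONITOR/MWAIT as unsupported. */
        static inline void guest_monitor_mwait(const void *addr)
        {
                asm volatile("monitor" :: "a"(addr), "c"(0UL), "d"(0UL) : "memory");
                asm volatile("mwait"   :: "a"(0UL), "c"(0UL));
        }
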
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220608224516.3788274-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 08 June 2022 (3 commits)
  4. 25 May 2022 (1 commit)
  5. 04 May 2022 (3 commits)
  6. 12 April 2022 (1 commit)
  7. 06 April 2022 (1 commit)
  8. 02 April 2022 (1 commit)
  9. 08 March 2022 (1 commit)
    • KVM: selftests: Add test to populate a VM with the max possible guest mem · b58c55d5
      Committed by Sean Christopherson
      Add a selftest that enables populating a VM with the maximum amount of
      guest memory allowed by the underlying architecture.  Abuse KVM's
      memslots by mapping a single host memory region into multiple memslots so
      that the selftest doesn't require a system with terabytes of RAM.
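
      A minimal sketch of that aliasing trick using the raw KVM ioctl rather
      than the selftest library (slot size, count and function name are
      illustrative):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdint.h>

        /* Point every memslot at the same host buffer so that N slots' worth of
         * guest physical memory is backed by a single host allocation. */
        static void alias_one_region(int vm_fd, void *host_mem,
                                     uint64_t slot_size, uint32_t nr_slots)
        {
                for (uint32_t slot = 0; slot < nr_slots; slot++) {
                        struct kvm_userspace_memory_region region = {
                                .slot            = slot,
                                .guest_phys_addr = (uint64_t)slot * slot_size,
                                .memory_size     = slot_size,
                                .userspace_addr  = (uint64_t)host_mem,
                        };

                        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
                }
        }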
      
      Default to 512gb of guest memory, which isn't all that interesting, but
      should work on all MMUs and doesn't take an exorbitant amount of memory
      or time.  E.g. testing with ~64tb of guest memory takes the better part
      of an hour, and requires 200gb of memory for KVM's page tables when using
      4kb pages.
      
      To inflict maximum abuse on KVM's MMU, default to 4kb pages (or whatever
      the not-hugepage size is) in the backing store (memfd).  Use memfd for
      the host backing store to ensure that hugepages are guaranteed when
      requested, and to give the user explicit control of the size of hugepage
      being tested.
      
      By default, spin up as many vCPUs as there are available to the selftest,
      and distribute the work of dirtying each 4kb chunk of memory across all
      vCPUs.  Dirtying guest memory forces KVM to populate its page tables, and
      also forces KVM to write back accessed/dirty information to struct page
      when the guest memory is freed.
      
      On x86, perform two passes with a MMU context reset between each pass to
      coerce KVM into dropping all references to the MMU root, e.g. to emulate
      a vCPU dropping the last reference.  Perform both passes and all
      rendezvous on all architectures in the hope that arm64 and s390x can gain
      similar shenanigans in the future.
      
      Measure and report the duration of each operation, which is helpful not
      only to verify the test is working as intended, but also to easily
      evaluate the performance differences between different page sizes.
      
      Provide command line options to limit the amount of guest memory, set the
      size of each slot (i.e. of the host memory region), set the number of
      vCPUs, and to enable usage of hugepages.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220226001546.360188-29-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  10. 04 March 2022 (1 commit)
  11. 01 March 2022 (1 commit)
    • KVM: selftests: Add test to verify KVM handling of ICR · 85c68eb4
      Committed by Sean Christopherson
      The main thing that the selftest verifies is that KVM copies x2APIC's
      ICR[63:32] to/from ICR2 when userspace accesses the vAPIC page via
      KVM_{G,S}ET_LAPIC.  KVM previously split x2APIC ICR to ICR+ICR2 at the
      time of write (from the guest), and so KVM must preserve that behavior
      for backwards compatibility between different versions of KVM.
      
      It will also test other invariants, e.g. that KVM clears the BUSY
      flag on ICR writes, that the reserved bits in ICR2 are dropped on writes
      from the guest, etc...
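
      A rough sketch of the userspace-side check (the offsets are the
      architectural xAPIC register offsets; the helper itself is hypothetical):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdint.h>
        #include <string.h>

        #define APIC_ICR  0x300   /* ICR[31:0]  in the vAPIC page */
        #define APIC_ICR2 0x310   /* ICR[63:32] in the vAPIC page */

        /* Reassemble the 64-bit x2APIC ICR from a KVM_GET_LAPIC snapshot; KVM
         * is expected to have split the guest's write across ICR and ICR2. */
        static uint64_t get_lapic_icr(int vcpu_fd)
        {
                struct kvm_lapic_state apic;
                uint32_t lo, hi;

                ioctl(vcpu_fd, KVM_GET_LAPIC, &apic);
                memcpy(&lo, &apic.regs[APIC_ICR], sizeof(lo));
                memcpy(&hi, &apic.regs[APIC_ICR2], sizeof(hi));
                return ((uint64_t)hi << 32) | lo;
        }
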
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220204214205.3306634-12-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  12. 14 February 2022 (1 commit)
  13. 20 January 2022 (3 commits)
  14. 18 January 2022 (1 commit)
  15. 29 December 2021 (1 commit)
  16. 20 December 2021 (1 commit)
    • KVM: selftests: Add test to verify TRIPLE_FAULT on invalid L2 guest state · ab1ef344
      Committed by Sean Christopherson
      Add a selftest that attempts to enter L2 with invalid guest state by
      exiting to userspace via I/O from L2, and then using KVM_SET_SREGS to set
      invalid guest state (marking TR unusable is arbitrarily chosen for its
      relative simplicity).
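
      Stuffing the invalid state is just a KVM_SET_SREGS round-trip; a minimal
      sketch (error handling omitted, function name illustrative):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>

        /* Mark TR unusable, which is invalid guest state for VMX non-root mode
         * and so can never be produced by a successful nested VM-Enter. */
        static void stuff_invalid_guest_state(int vcpu_fd)
        {
                struct kvm_sregs sregs;

                ioctl(vcpu_fd, KVM_GET_SREGS, &sregs);
                sregs.tr.unusable = 1;
                ioctl(vcpu_fd, KVM_SET_SREGS, &sregs);
        }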
      
      This is a regression test for a bug introduced by commit c8607e4a
      ("KVM: x86: nVMX: don't fail nested VM entry on invalid guest state if
      !from_vmentry"), which incorrectly set vmx->fail=true when L2 had invalid
      guest state and ultimately triggered a WARN due to nested_vmx_vmexit()
      seeing vmx->fail==true while attempting to synthesize a nested VM-Exit.
      
      This is also a functional test to verify that KVM synthesizes TRIPLE_FAULT
      for L2, which is somewhat arbitrary behavior, instead of emulating L2.
      KVM should never emulate L2 due to invalid guest state, as it's
      architecturally impossible for L1 to run an L2 guest with invalid state,
      since nested VM-Enter should always fail, i.e. L1 needs to do the emulation.
      Stuffing state via KVM ioctl() is a non-architectural, out-of-band case,
      hence the TRIPLE_FAULT being rather arbitrary.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20211207193006.120997-5-seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  17. 10 December 2021 (1 commit)
  18. 18 November 2021 (1 commit)
    • selftests: KVM: Add /x86_64/sev_migrate_tests to .gitignore · b768f60b
      Committed by Arnaldo Carvalho de Melo
        $ git status
        nothing to commit, working tree clean
        $
        $ make -C tools/testing/selftests/kvm/ > /dev/null 2>&1
        $ git status
      
        Untracked files:
          (use "git add <file>..." to include in what will be committed)
        	tools/testing/selftests/kvm/x86_64/sev_migrate_tests
      
        nothing added to commit but untracked files present (use "git add" to track)
        $
      
      Fixes: 6a581508 ("selftest: KVM: Add intra host migration tests")
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Marc Orr <marcorr@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Gonda <pgonda@google.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Message-Id: <YZPIPfvYgRDCZi/w@kernel.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  19. 19 October 2021 (2 commits)
  20. 17 October 2021 (1 commit)
  21. 23 September 2021 (1 commit)
    • KVM: x86: selftests: test simultaneous uses of V_IRQ from L1 and L0 · 1ad32105
      Committed by Maxim Levitsky
      Test that if:
      
      * L1 disables virtual interrupt masking, and INTR intercept.
      
      * L1 sets up a virtual interrupt to be injected into L2 and enters L2 with
        interrupts disabled, so the virtual interrupt is left pending.
      
      * Now an external interrupt arrives in L1, and since L1 doesn't intercept
        it, it should be delivered to L2 when L2 enables interrupts.

        To do this, L0 (ab)uses V_IRQ to set up an interrupt window and returns
        to L2 (see the sketch after this list).
      
      * L2 enables interrupts.
        This should trigger the interrupt window, injection of the external
        interrupt, and delivery of the virtual interrupt, which can now be done.
      
      * Test that L2 now gets those interrupts.
      
      This is the test that demonstrates the issue that was
      fixed in the previous patch.
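
      The core of the L1 setup above is a couple of VMCB int_ctl bits; the
      hedged sketch below uses locally defined constants and a struct that only
      mirrors the relevant fields (the real test uses the selftest SVM helpers):

        #include <stdint.h>

        /* Illustrative subset of the VMCB control area (see AMD APM vol. 2). */
        struct vmcb_ctrl_sketch {
                uint32_t int_ctl;
                uint32_t int_vector;
        };

        #define V_IRQ_PENDING  (1u << 8)    /* a virtual interrupt is pending      */
        #define V_INTR_MASKING (1u << 24)   /* virtualize EFLAGS.IF for the guest  */

        static void l1_setup_vintr(struct vmcb_ctrl_sketch *ctrl, uint8_t vector)
        {
                ctrl->int_ctl &= ~V_INTR_MASKING;  /* leave interrupt masking to L2 */
                ctrl->int_ctl |= V_IRQ_PENDING;    /* pend a virtual interrupt...   */
                ctrl->int_vector = vector;         /* ...with this vector           */
        }
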
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20210914154825.104886-3-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  22. 22 September 2021 (1 commit)
  23. 19 August 2021 (1 commit)
  24. 28 July 2021 (1 commit)
    • KVM: selftests: Introduce access_tracking_perf_test · c33e05d9
      Committed by David Matlack
      This test measures the performance effects of KVM's access tracking.
      Access tracking is driven by the MMU notifiers test_young, clear_young,
      and clear_flush_young. These notifiers do not have a direct userspace
      API; however, the clear_young notifier can be triggered by marking
      pages as idle in /sys/kernel/mm/page_idle/bitmap. This test leverages
      that mechanism to enable access tracking on guest memory.
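
      For reference, marking a host-virtual page idle through page_idle means
      translating the address to a PFN via /proc/self/pagemap and then setting
      that PFN's bit in the bitmap; a simplified sketch (error handling and the
      pagemap "present" check omitted):

        #include <fcntl.h>
        #include <stdint.h>
        #include <unistd.h>

        #define PAGEMAP_PFN_MASK ((1ULL << 55) - 1)  /* pagemap bits 0-54 = PFN */

        static void mark_page_idle(void *hva)
        {
                int pagemap = open("/proc/self/pagemap", O_RDONLY);
                int bitmap  = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);
                uint64_t vpn = (uint64_t)hva / getpagesize();
                uint64_t entry, pfn, bits;

                pread(pagemap, &entry, sizeof(entry), vpn * sizeof(entry));
                pfn = entry & PAGEMAP_PFN_MASK;

                /* The bitmap is accessed in 64-bit chunks; bit (pfn % 64) of the
                 * chunk at file offset (pfn / 64) * 8 covers this page frame.
                 * Writing 1 marks the page idle (clear_young under the hood). */
                bits = 1ULL << (pfn % 64);
                pwrite(bitmap, &bits, sizeof(bits), (pfn / 64) * sizeof(bits));

                close(pagemap);
                close(bitmap);
        }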
      
      To measure performance this test runs a VM with a configurable number of
      vCPUs that each touch every page in disjoint regions of memory.
      Performance is measured in the time it takes all vCPUs to finish
      touching their predefined region.
      
      Example invocation:
      
        $ ./access_tracking_perf_test -v 8
        Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
        guest physical test memory offset: 0xffdfffff000
      
        Populating memory             : 1.337752570s
        Writing to populated memory   : 0.010177640s
        Reading from populated memory : 0.009548239s
        Mark memory idle              : 23.973131748s
        Writing to idle memory        : 0.063584496s
        Mark memory idle              : 24.924652964s
        Reading from idle memory      : 0.062042814s
      
      Breaking down the results:
      
       * "Populating memory": The time it takes for all vCPUs to perform the
         first write to every page in their region.
      
       * "Writing to populated memory" / "Reading from populated memory": The
         time it takes for all vCPUs to write and read to every page in their
         region after it has been populated. This serves as a control for the
         later results.
      
       * "Mark memory idle": The time it takes for every vCPU to mark every
         page in their region as idle through page_idle.
      
       * "Writing to idle memory" / "Reading from idle memory": The time it
         takes for all vCPUs to write and read to every page in their region
         after it has been marked idle.
      
      This test should be portable across architectures but it is only enabled
      for x86_64 since that's all I have tested.
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Signed-off-by: David Matlack <dmatlack@google.com>
      Message-Id: <20210713220957.3493520-7-dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  25. 25 June 2021 (2 commits)
  26. 24 June 2021 (1 commit)
    • KVM: selftests: Add x86-64 test to verify MMU reacts to CPUID updates · ef6a74b2
      Committed by Sean Christopherson
      Add an x86-only test to verify that x86's MMU reacts to CPUID updates
      that impact the MMU.  KVM has had multiple bugs where it fails to
      reconfigure the MMU after the guest's vCPU model changes.
      
      Sadly, this test is effectively limited to shadow paging because the
      hardware page walk handler doesn't support software disabling of GBPAGES
      support, and KVM doesn't manually walk the GVA->GPA on faults for
      performance reasons (doing so would largely defeat the benefits of TDP).
      
      Don't require !TDP for the tests as there is still value in running the
      tests with TDP, even though the tests will fail (barring KVM hacks).
      E.g. KVM should not completely explode if MAXPHYADDR results in KVM using
      4-level vs. 5-level paging for the guest.
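
      One of the knobs such a test can flip is the 1GiB-page CPUID bit; a hedged
      sketch of clearing it via the raw ioctl (the real test uses the selftest
      CPUID helpers, and the function name is made up):

        #include <linux/kvm.h>
        #include <sys/ioctl.h>
        #include <stdint.h>

        #define X86_FEATURE_GBPAGES_BIT 26  /* CPUID.80000001H:EDX[26], 1GiB pages */

        /* Clear GBPAGES in a previously fetched CPUID array and push the updated
         * guest CPUID back to KVM, which should then reconfigure its MMU. */
        static void disable_gbpages(int vcpu_fd, struct kvm_cpuid2 *cpuid)
        {
                for (uint32_t i = 0; i < cpuid->nent; i++) {
                        struct kvm_cpuid_entry2 *e = &cpuid->entries[i];

                        if (e->function == 0x80000001)
                                e->edx &= ~(1u << X86_FEATURE_GBPAGES_BIT);
                }

                ioctl(vcpu_fd, KVM_SET_CPUID2, cpuid);
        }
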
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210622200529.3650424-20-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  27. 22 June 2021 (1 commit)
  28. 18 June 2021 (2 commits)
  29. 14 June 2021 (1 commit)
  30. 27 May 2021 (1 commit)
    • KVM: selftests: add a memslot-related performance benchmark · cad347fa
      Committed by Maciej S. Szmigiero
      This benchmark contains the following tests:
      * Map test, where the host unmaps guest memory while the guest writes to
      it (maps it).
      
      The test is designed in a way to make the unmap operation on the host
      take a negligible amount of time in comparison with the mapping
      operation in the guest.
      
      The test area is actually split in two: the first half is being mapped
      by the guest while the second half is being unmapped by the host.
      Then a guest <-> host sync happens and the areas are reversed.
      
      * Unmap test which is broadly similar to the above map test, but it is
      designed in an opposite way: to make the mapping operation in the guest
      take a negligible amount of time in comparison with the unmap operation
      on the host.
      This test is available in two variants: with a per-page unmap operation
      or a chunked one (using a 2 MiB chunk size).
      
      * Move active area test which involves moving the last (highest gfn)
      memslot a bit back and forth on the host while the guest is
      concurrently writing around the area being moved (including over the
      moved memslot).
      
      * Move inactive area test which is similar to the previous move active
      area test, but now guest writes all happen outside of the area being
      moved.
      
      * Read / write test in which the guest writes to the beginning of each
      page of the test area while the host writes to the middle of each such
      page.
      Then each side checks the values the other side has written.
      This particular test is not expected to give different results depending
      on the particular memslot implementation; it is meant as a rough sanity
      check and to provide insight into the expected spread of test results.
      
      Each test performs its operation in a loop until a test period ends
      (this is 5 seconds by default, but it is configurable).
      Then the total count of loops done is divided by the actual elapsed
      time to give the test result.
      
      The tests have a configurable memslot cap via the "-s" test option; by
      default the system maximum is used.
      Each test is repeated a particular number of times (20 by default) and
      the best result achieved is printed.
      
      The test memory area is divided equally between memslots; the remainder
      is added to the last memslot.
      The test area size does not depend on the number of memslots in use.
      
      The tests also measure the time that it took to add all these memslots.
      The best result from the tests that use the whole test area is printed
      after all the requested tests are done.
      
      In general, these tests are designed to use as much memory as possible
      (within reason) while still doing 100+ loops even on high memslot counts
      with the default test length.
      Increasing the test runtime makes it increasingly more likely that some
      event will happen on the system during the test run, which might lower
      the test result.
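
      The measurement itself is simply a loop count divided by elapsed time; a
      simplified sketch of that pattern (the per-iteration work function is
      hypothetical):

        #include <stdint.h>
        #include <time.h>

        static double now_seconds(void)
        {
                struct timespec ts;

                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        /* Run one_iteration() repeatedly for the configured period (5 s by
         * default) and return iterations per second; the best of the repeated
         * runs is what ultimately gets printed. */
        static double run_test(void (*one_iteration)(void), double period_sec)
        {
                double start = now_seconds(), elapsed;
                uint64_t loops = 0;

                do {
                        one_iteration();
                        loops++;
                        elapsed = now_seconds() - start;
                } while (elapsed < period_sec);

                return loops / elapsed;
        }
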
      Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <8d31bb3d92bc8fa33a9756fa802ee14266ab994e.1618253574.git.maciej.szmigiero@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  31. 20 April 2021 (1 commit)
    • KVM: selftests: Add a test for kvm page table code · b9c2bd50
      Committed by Yanan Wang
      This test serves as a performance tester and a bug reproducer for
      kvm page table code (GPA->HPA mappings), so it gives guidance to
      people trying to improve kvm.
      
      The function guest_code() covers the conditions where a single vcpu or
      multiple vcpus access guest pages within the same memory region, in three
      VM stages (before dirty logging, during dirty logging, after dirty logging).
      In addition, the backing source memory type (ANONYMOUS/THP/HUGETLB) of the
      tested memory region can be specified by users, which means users can choose
      whether normal page mappings or block mappings are created in the test.
      
      If ANONYMOUS memory is specified, kvm will create normal page mappings
      for the tested memory region before dirty logging, and update attributes
      of the page mappings from RO to RW during dirty logging. If THP/HUGETLB
      memory is specified, kvm will create block mappings for the tested memory
      region before dirty logging, and split the block mappings into normal page
      mappings during dirty logging, and coalesce the page mappings back into
      block mappings after dirty logging is stopped.
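
      The three backing types map onto standard mmap/madvise knobs; a hedged
      sketch of how such a backing store might be allocated (the selftest has
      its own backing-source abstraction, so the enum and helper here are
      illustrative):

        #include <stdlib.h>
        #include <sys/mman.h>

        enum backing_src { SRC_ANONYMOUS, SRC_THP, SRC_HUGETLB };

        static void *alloc_backing(enum backing_src src, size_t size)
        {
                int flags = MAP_PRIVATE | MAP_ANONYMOUS;
                void *mem;

                if (src == SRC_HUGETLB)
                        flags |= MAP_HUGETLB;  /* explicit hugetlbfs-backed pages */

                mem = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
                if (mem == MAP_FAILED)
                        exit(1);

                if (src == SRC_THP)
                        madvise(mem, size, MADV_HUGEPAGE);  /* hint for THP */

                return mem;
        }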
      
      So in summary, as a performance tester, this test can present the
      performance of kvm creating/updating normal page mappings, or the
      performance of kvm creating/splitting/recovering block mappings,
      through execution time.
      
      When we need to coalesce the page mappings back into block mappings after
      dirty logging is stopped, we must first invalidate *all* the TLB entries
      for the page mappings right before installing the block entry, because a
      TLB conflict abort error could occur if we can't fully invalidate the TLB
      entries. We have hit this TLB conflict twice in the aarch64 software
      implementation and fixed it. As this test can simulate the process of a VM
      with block mappings going from dirty logging enabled to dirty logging
      stopped, it can also reproduce this TLB conflict abort due to inadequate
      TLB invalidation when coalescing tables.
      Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20210330080856.14940-11-wangyanan55@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>