1. 14 Jun 2021, 1 commit
  2. 27 May 2021, 1 commit
    • KVM: selftests: add a memslot-related performance benchmark · cad347fa
      Maciej S. Szmigiero authored
      This benchmark contains the following tests:
      * Map test, where the host unmaps guest memory while the guest writes to
      it (maps it).
      
      The test is designed in a way to make the unmap operation on the host
      take a negligible amount of time in comparison with the mapping
      operation in the guest.
      
      The test area is actually split in two: the first half is being mapped
      by the guest while the second half is being unmapped by the host.
      Then a guest <-> host sync happens and the areas are reversed.
      
      * Unmap test which is broadly similar to the above map test, but it is
      designed in an opposite way: to make the mapping operation in the guest
      take a negligible amount of time in comparison with the unmap operation
      on the host.
      This test is available in two variants: with per-page unmap operation
      or a chunked one (using 2 MiB chunk size).
      
      * Move active area test which involves moving the last (highest gfn)
      memslot a bit back and forth on the host while the guest is
      concurrently writing around the area being moved (including over the
      moved memslot).
      
      * Move inactive area test which is similar to the previous move active
      area test, but now guest writes all happen outside of the area being
      moved.
      
      * Read / write test in which the guest writes to the beginning of each
      page of the test area while the host writes to the middle of each such
      page.
      Then each side checks the values the other side has written.
      This particular test is not expected to give different results depending
      on the particular memslot implementation; it is meant as a rough sanity
      check and to provide insight into the expected spread of test results.
      
      Each test performs its operation in a loop until a test period ends
      (this is 5 seconds by default, but it is configurable).
      Then the total count of loops done is divided by the actual elapsed
      time to give the test result.
      
      The tests have a configurable memslot cap set with the "-s" test option;
      by default the system maximum is used.
      Each test is repeated a particular number of times (by default 20
      times); the best result achieved is printed.
      
      The test memory area is divided equally between memslots; the remainder
      is added to the last memslot.
      The test area size does not depend on the number of memslots in use.
      
      The tests also measure the time that it took to add all these memslots.
      The best result from the tests that use the whole test area is printed
      after all the requested tests are done.
      
      In general, these tests are designed to use as much memory as possible
      (within reason) while still doing 100+ loops even on high memslot counts
      with the default test length.
      Increasing the test runtime makes it increasingly more likely that some
      event will happen on the system during the test run, which might lower
      the test result.
      Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <8d31bb3d92bc8fa33a9756fa802ee14266ab994e.1618253574.git.maciej.szmigiero@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cad347fa
  3. 20 Apr 2021, 1 commit
    • KVM: selftests: Add a test for kvm page table code · b9c2bd50
      Yanan Wang authored
      This test serves as a performance tester and a bug reproducer for
      the kvm page table code (GPA->HPA mappings), so it can give guidance
      to people trying to improve kvm.
      
      The function guest_code() can cover the conditions where a single vcpu or
      multiple vcpus access guest pages within the same memory region, in three
      VM stages (before dirty logging, during dirty logging, after dirty logging).
      Besides, the backing src memory type (ANONYMOUS/THP/HUGETLB) of the tested
      memory region can be specified by users, which means users can choose
      whether normal page mappings or block mappings are created in the test.
      
      If ANONYMOUS memory is specified, kvm will create normal page mappings
      for the tested memory region before dirty logging, and update attributes
      of the page mappings from RO to RW during dirty logging. If THP/HUGETLB
      memory is specified, kvm will create block mappings for the tested memory
      region before dirty logging, split the block mappings into normal page
      mappings during dirty logging, and coalesce the page mappings back into
      block mappings after dirty logging is stopped.
      
      So in summary, as a performance tester, this test can present the
      performance of kvm creating/updating normal page mappings, or the
      performance of kvm creating/splitting/recovering block mappings,
      through execution time.
      
      When we need to coalesce the page mappings back into block mappings after
      dirty logging is stopped, we must first invalidate *all* the TLB
      entries for the page mappings right before installing the block entry,
      because a TLB conflict abort error could occur if we fail to invalidate
      the TLB entries fully. We have hit this TLB conflict twice in the aarch64
      software implementation and fixed it. As this test can simulate the
      process of a VM with block mappings going from dirty logging enabled to
      dirty logging stopped, it can also reproduce this TLB conflict abort
      caused by inadequate TLB invalidation when coalescing tables.
      Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20210330080856.14940-11-wangyanan55@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b9c2bd50
  4. 06 Apr 2021, 1 commit
  5. 19 Mar 2021, 2 commits
  6. 18 Mar 2021, 1 commit
    • selftests: kvm: Add basic Hyper-V clocksources tests · 2c7f76b4
      Vitaly Kuznetsov authored
      Introduce a new selftest for Hyper-V clocksources (MSR-based reference TSC
      and TSC page). As a starting point, test the following:
      1) Reference TSC is a 1 GHz clock.
      2) Reference TSC and TSC page give the same reading.
      3) TSC page gets updated upon KVM_SET_CLOCK call.
      4) TSC page does not get updated when guest opted for reenlightenment.
      5) Disabled TSC page doesn't get updated.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Message-Id: <20210318140949.1065740-1-vkuznets@redhat.com>
      [Add a host-side test using TSC + KVM_GET_MSR too. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2c7f76b4
  7. 16 Feb 2021, 1 commit
  8. 11 Feb 2021, 1 commit
  9. 04 Feb 2021, 4 commits
  10. 12 Dec 2020, 2 commits
    • selftests: kvm: Merge user_msr_test into userspace_msr_exit_test · fb636053
      Aaron Lewis authored
      Both user_msr_test and userspace_msr_exit_test test the functionality
      of kvm_msr_filter.  Instead of testing this feature in two tests, merge
      them together, so there is only one test for this feature.
      Signed-off-by: Aaron Lewis <aaronlewis@google.com>
      Message-Id: <20201204172530.2958493-1-aaronlewis@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fb636053
    • selftests: kvm: Test MSR exiting to userspace · 3cea1891
      Aaron Lewis authored
      Add a selftest to test that when the ioctl KVM_X86_SET_MSR_FILTER is
      called with an MSR list, those MSRs exit to userspace.
      
      This test uses 3 MSRs to test this:
        1. MSR_IA32_XSS, an MSR the kernel knows about.
        2. MSR_IA32_FLUSH_CMD, an MSR the kernel does not know about.
        3. MSR_NON_EXISTENT, an MSR invented in this test for the purposes of
           passing a fake MSR from the guest to userspace.  KVM just acts as a
           pass through.
      
      Userspace is also able to inject a #GP.  This is demonstrated when
      MSR_IA32_XSS and MSR_IA32_FLUSH_CMD are misused in the test.  When this
      happens, a #GP is initiated in userspace and thrown into the guest, where
      it is handled gracefully by the exception handling framework introduced
      earlier in this series.
      
      Tests for the generic instruction emulator were also added.  For this to
      work the module parameter kvm.force_emulation_prefix=1 has to be enabled.
      If it isn't enabled the tests will be skipped.
      
      A test was also added to ensure the MSR permission bitmap is being set
      correctly by executing reads and writes of MSR_FS_BASE and MSR_GS_BASE
      in the guest while alternating which MSR userspace should intercept.  If
      the permission bitmap is being set correctly only one of the MSRs should
      be coming through at a time, and the guest should be able to read and
      write the other one directly.
      Signed-off-by: Aaron Lewis <aaronlewis@google.com>
      Reviewed-by: Alexander Graf <graf@amazon.com>
      Message-Id: <20201012194716.3950330-5-aaronlewis@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3cea1891
  11. 15 Nov 2020, 1 commit
  12. 08 Nov 2020, 4 commits
    • KVM: selftests: Introduce the dirty log perf test · 4fd94ec7
      Ben Gardon authored
      The dirty log perf test will time various dirty logging operations
      (enabling dirty logging, dirtying memory, getting the dirty log,
      clearing the dirty log, and disabling dirty logging) in order to
      quantify dirty logging performance. This test can be used to inform
      future performance improvements to KVM's dirty logging infrastructure.
      
      This series was tested by running the following invocations on an Intel
      Skylake machine:
      dirty_log_perf_test -b 20m -i 100 -v 64
      dirty_log_perf_test -b 20g -i 5 -v 4
      dirty_log_perf_test -b 4g -i 5 -v 32
      demand_paging_test -b 20m -v 64
      demand_paging_test -b 20g -v 4
      demand_paging_test -b 4g -v 32
      All behaved as expected.
      Signed-off-by: Ben Gardon <bgardon@google.com>
      Message-Id: <20201027233733.1484855-6-bgardon@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4fd94ec7
    • KVM: selftests: Add blessed SVE registers to get-reg-list · 31d21295
      Andrew Jones authored
      Add support for the SVE registers to get-reg-list and create a
      new test, get-reg-list-sve, which tests them when running on a
      machine with SVE support.
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20201029201703.102716-5-drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      31d21295
    • KVM: selftests: Add aarch64 get-reg-list test · fd02029a
      Andrew Jones authored
      Check for KVM_GET_REG_LIST regressions. The blessed list was
      created by running on v4.15 with the --core-reg-fixup option.
      The following script was also used in order to annotate system
      registers with their names when possible. When new system
      registers are added the names can just be added manually using
      the same grep.
      
      while read reg; do
      	if [[ ! $reg =~ ARM64_SYS_REG ]]; then
      		printf "\t$reg\n"
      		continue
      	fi
      	encoding=$(echo "$reg" | sed "s/ARM64_SYS_REG(//;s/),//")
      	if ! name=$(grep "$encoding" ../../../../arch/arm64/include/asm/sysreg.h); then
      		printf "\t$reg\n"
      		continue
      	fi
      	name=$(echo "$name" | sed "s/.*SYS_//;s/[\t ]*sys_reg($encoding)$//")
      	printf "\t$reg\t/* $name */\n"
      done < <(aarch64/get-reg-list --core-reg-fixup --list)
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20201029201703.102716-3-drjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fd02029a
    • selftests: kvm: test enforcement of paravirtual cpuid features · ac4a4d6d
      Oliver Upton authored
      Add a set of tests that ensure the guest cannot access paravirtual MSRs
      and hypercalls that have been disabled in the KVM_CPUID_FEATURES leaf.
      Expect a #GP in the case of MSR accesses and -KVM_ENOSYS from
      hypercalls.
      
      Cc: Jim Mattson <jmattson@google.com>
      Signed-off-by: Oliver Upton <oupton@google.com>
      Reviewed-by: Peter Shier <pshier@google.com>
      Reviewed-by: Aaron Lewis <aaronlewis@google.com>
      Message-Id: <20201027231044.655110-7-oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ac4a4d6d
  13. 31 Oct 2020, 1 commit
  14. 28 Sep 2020, 1 commit
  15. 08 Jun 2020, 1 commit
  16. 01 Jun 2020, 1 commit
  17. 16 Apr 2020, 1 commit
  18. 25 Mar 2020, 1 commit
  19. 17 Mar 2020, 3 commits
  20. 25 Feb 2020, 1 commit
  21. 22 Oct 2019, 2 commits
  22. 09 Aug 2019, 1 commit
  23. 19 Jun 2019, 1 commit
  24. 08 May 2019, 3 commits
  25. 12 Feb 2019, 1 commit
  26. 20 Oct 2018, 1 commit
  27. 17 Oct 2018, 1 commit