1. 31 October 2013, 1 commit
    • kvm: Add KVM_GET_EMULATED_CPUID · 9c15bb1d
      Committed by Borislav Petkov
      Add a KVM ioctl that reports which system functionality KVM
      emulates. The format is that of CPUID, and we return the
      corresponding CPUID bits set for the functionality we do emulate.
      (A hedged userspace usage sketch follows this entry.)
      
      Make sure ->padding is passed in clean from userspace so that we
      can use it for something in the future, once the ioctl is cast in
      stone.
      
      s/kvm_dev_ioctl_get_supported_cpuid/kvm_dev_ioctl_get_cpuid/ while
      at it.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9c15bb1d
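      As an illustration only (not taken from the patch), a minimal
      userspace sketch of querying the new ioctl might look like the
      following; the entry count of 64 is an arbitrary assumption and
      error handling is kept minimal:

          /* Sketch: ask KVM which CPUID bits it emulates (as opposed to
           * the hardware-backed set from KVM_GET_SUPPORTED_CPUID). */
          #include <fcntl.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          int main(void)
          {
                  int kvm = open("/dev/kvm", O_RDWR);
                  int nent = 64;        /* assumed to be enough entries */
                  struct kvm_cpuid2 *cpuid;

                  if (kvm < 0) {
                          perror("/dev/kvm");
                          return 1;
                  }

                  cpuid = calloc(1, sizeof(*cpuid) +
                                    nent * sizeof(struct kvm_cpuid_entry2));
                  cpuid->nent = nent;

                  if (ioctl(kvm, KVM_GET_EMULATED_CPUID, cpuid) < 0) {
                          perror("KVM_GET_EMULATED_CPUID");
                          return 1;
                  }

                  for (unsigned int i = 0; i < cpuid->nent; i++)
                          printf("leaf %#x idx %u: eax=%#x ebx=%#x ecx=%#x edx=%#x\n",
                                 cpuid->entries[i].function, cpuid->entries[i].index,
                                 cpuid->entries[i].eax, cpuid->entries[i].ebx,
                                 cpuid->entries[i].ecx, cpuid->entries[i].edx);

                  free(cpuid);
                  return 0;
          }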
  2. 28 October 2013, 5 commits
  3. 18 October 2013, 4 commits
    • KVM: ARM: Transparent huge page (THP) support · 9b5fdb97
      Committed by Christoffer Dall
      Support transparent huge pages in KVM/ARM and KVM/ARM64.  The
      transparent_hugepage_adjust code is not very pretty, but this is
      also how it is solved on x86 and seems to be simply an artifact of
      how THPs behave.  This should eventually be shared across
      architectures if possible, but that can always be changed down the
      road.  (A conceptual sketch of the adjustment follows this entry.)
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      9b5fdb97
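      The rough idea behind the adjustment, sketched below as an
      illustration of the concept rather than the code from the patch;
      is_thp_backed() is a hypothetical helper, not the kernel's actual
      interface:

          /* Conceptual sketch only: if the faulting pfn lives inside a
           * transparent huge page, align the pfn and the guest IPA down
           * to a PMD boundary so a single stage-2 huge mapping can be
           * installed instead of many individual PTEs. */
          static bool adjust_for_thp(unsigned long *pfnp, phys_addr_t *ipap)
          {
                  unsigned long pfn = *pfnp;
                  unsigned long mask = (PMD_SIZE >> PAGE_SHIFT) - 1;

                  if (!is_thp_backed(pfn))      /* hypothetical helper */
                          return false;         /* fall back to 4K pages */

                  *ipap &= PMD_MASK;            /* align guest IPA */
                  *pfnp = pfn & ~mask;          /* head page of the THP */
                  return true;                  /* caller installs a huge PMD */
          }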
    • KVM: ARM: Support hugetlbfs backed huge pages · ad361f09
      Committed by Christoffer Dall
      Support huge pages in KVM/ARM and KVM/ARM64.  The pud_huge check on
      the unmap path may feel a bit silly, as pud_huge is always defined
      to false here, but the compiler should be smart about this.
      
      Note: This deals only with VMAs marked as huge, which are allocated
      by userspace through hugetlbfs.  Transparent huge pages can only be
      detected by looking at the underlying pages (or the page tables
      themselves), and this patch so far simply maps those on a
      page-by-page basis in the Stage-2 page tables.  (A userspace sketch
      of backing guest RAM with hugetlbfs follows this entry.)
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      ad361f09
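      For context, a hedged sketch of how userspace can back a guest
      memory slot with huge pages so that this path is taken; the guest
      physical base, the size, and the slot number are arbitrary
      assumptions, and vm_fd is assumed to come from KVM_CREATE_VM:

          /* Sketch: back a guest memory slot with MAP_HUGETLB memory so
           * the stage-2 fault path can install huge-page mappings. */
          #include <sys/ioctl.h>
          #include <sys/mman.h>
          #include <linux/kvm.h>

          static int add_huge_ram(int vm_fd)
          {
                  size_t size = 128UL << 20;    /* 128MB, arbitrary */
                  void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
                                   -1, 0);
                  if (ram == MAP_FAILED)
                          return -1;            /* e.g. no huge pages reserved */

                  struct kvm_userspace_memory_region region = {
                          .slot            = 0,
                          .guest_phys_addr = 0x80000000UL, /* arbitrary IPA base */
                          .memory_size     = size,
                          .userspace_addr  = (unsigned long)ram,
                  };
                  return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
          }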
    • KVM: ARM: Update comments for kvm_handle_wfi · 86ed81aa
      Committed by Christoffer Dall
      Update the comments to reflect what is really going on, and add the
      TWE bit to the comments in kvm_arm.h.
      
      Also rename the function to kvm_handle_wfx, as is done on arm64,
      for consistency and uber-correctness.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      86ed81aa
    • ARM: KVM: Yield CPU when vcpu executes a WFE · 58d5ec8f
      Committed by Marc Zyngier
      On an (even slightly) oversubscribed system, spinlocks quickly
      become a bottleneck, as some vcpus spin waiting for a lock to be
      released while the vcpu holding the lock may not be running at all.
      
      This creates contention, and the observed slowdown is 40x for
      hackbench. No, this isn't a typo.
      
      The solution is to trap blocking WFEs and tell KVM that we're
      now spinning. This ensures that other vcpus will get a scheduling
      boost, allowing the lock to be released more quickly. Also, using
      CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT slightly improves performance
      when the VM is severely overcommitted. (A hedged sketch of the trap
      handling follows this entry.)
      
      Quick test to estimate the performance: hackbench 1 process 1000
      
      2xA15 host (baseline):	1.843s
      
      2xA15 guest w/o patch:	2.083s
      4xA15 guest w/o patch:	80.212s
      8xA15 guest w/o patch:	Could not be bothered to find out
      
      2xA15 guest w/ patch:	2.102s
      4xA15 guest w/ patch:	3.205s
      8xA15 guest w/ patch:	6.887s
      
      So we go from a 40x degradation to 1.5x in the 2x overcommit case,
      which is vaguely more acceptable.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      58d5ec8f
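      The shape of the change, as a hedged sketch rather than the literal
      patch; the decode helper name is hypothetical, while
      kvm_vcpu_on_spin() and kvm_vcpu_block() are existing KVM
      primitives:

          /* Sketch of the idea: a trapped WFE means "this vcpu is
           * spinning on a lock", so yield to another runnable vcpu
           * instead of blocking; a trapped WFI still blocks until an
           * interrupt arrives. */
          static int handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
          {
                  if (trapped_insn_is_wfe(vcpu))    /* hypothetical decode */
                          kvm_vcpu_on_spin(vcpu);   /* scheduling boost */
                  else
                          kvm_vcpu_block(vcpu);     /* wait for an interrupt */

                  return 1;
          }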
  4. 17 October 2013, 1 commit
  5. 16 October 2013, 6 commits
  6. 15 October 2013, 11 commits
  7. 14 October 2013, 12 commits