1. 01 Oct 2015 (7 commits)
  2. 06 Sep 2015 (1 commit)
  3. 30 Jul 2015 (1 commit)
  4. 29 Jul 2015 (1 commit)
  5. 23 Jul 2015 (2 commits)
  6. 10 Jul 2015 (1 commit)
  7. 05 Jun 2015 (2 commits)
    • KVM: implement multiple address spaces · f481b069
      Paolo Bonzini authored
      Only two ioctls have to be modified; the address space id is
      placed in the upper 16 bits of their slot id argument (see the
      sketch below).
      
      As of this patch, no architecture defines more than one
      address space; x86 will be the first.
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f481b069
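      A minimal sketch of that encoding, assuming a 32-bit slot id
      argument; the macro and helper names here are illustrative, not
      taken from the patch:

          #include <stdint.h>

          #define KVM_AS_ID_SHIFT  16                  /* upper 16 bits */
          #define KVM_SLOT_ID_MASK ((1u << KVM_AS_ID_SHIFT) - 1)

          /* address space id rides in the upper 16 bits of the slot id */
          static inline uint16_t slot_as_id(uint32_t slot)
          {
                  return slot >> KVM_AS_ID_SHIFT;
          }

          /* slot number within that address space, in the lower 16 bits */
          static inline uint16_t slot_id(uint32_t slot)
          {
                  return slot & KVM_SLOT_ID_MASK;
          }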
    • KVM: add vcpu-specific functions to read/write/translate GFNs · 8e73485c
      Paolo Bonzini authored
      We need to hide SMRAM from guests not running in SMM.  Therefore, all
      uses of kvm_read_guest* and kvm_write_guest* must be changed to use
      different address spaces, depending on whether the VCPU is in system
      management mode.  We need to introduce a new family of functions for
      this purpose.
      
      For now, the VCPU-based functions behave the same as the existing
      per-VM ones; they just accept a different type for the first
      argument.  Later, however, they will be changed to use one of many
      "struct kvm_memslots" stored in struct kvm, through an architecture
      hook.  The VM-based functions will unconditionally use the first
      memslots pointer.
      
      Whenever possible, this patch introduces slot-based functions with an
      __ prefix, with two wrappers for the generic and vcpu-based actions
      (see the sketch below).  The exceptions are kvm_read_guest and
      kvm_write_guest, which are copied into the new functions
      kvm_vcpu_read_guest and kvm_vcpu_write_guest.
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e73485c
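      A sketch of that wrapper pattern, simplified from the description
      above (the read path only; the slot-based core does the work and the
      wrappers differ only in how they resolve the gfn to a memslot):

          /* slot-based core: does the actual copy given a memslot */
          static int __kvm_read_guest_page(struct kvm_memory_slot *slot,
                                           gfn_t gfn, void *data,
                                           int offset, int len)
          {
                  unsigned long addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);

                  if (kvm_is_error_hva(addr))
                          return -EFAULT;
                  if (__copy_from_user(data, (void __user *)addr + offset, len))
                          return -EFAULT;
                  return 0;
          }

          /* per-VM wrapper: unconditionally uses the VM's memslots */
          int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data,
                                  int offset, int len)
          {
                  return __kvm_read_guest_page(gfn_to_memslot(kvm, gfn), gfn,
                                               data, offset, len);
          }

          /* per-VCPU wrapper: will later pick the VCPU's address space */
          int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
                                       void *data, int offset, int len)
          {
                  return __kvm_read_guest_page(kvm_vcpu_gfn_to_memslot(vcpu, gfn),
                                               gfn, data, offset, len);
          }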
  8. 04 Jun 2015 (1 commit)
  9. 28 May 2015 (2 commits)
  10. 26 May 2015 (2 commits)
  11. 20 May 2015 (1 commit)
    • KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async · 3520469d
      Paolo Bonzini authored
      gfn_to_pfn_async is used in just one place, and because of
      x86-specific treatment that place will need to look at the memory
      slot.  Hence, inline it into try_async_pf (sketched below) and
      export __gfn_to_pfn_memslot.
      
      The patch also switches the subsequent call to gfn_to_pfn_prot to use
      __gfn_to_pfn_memslot.  This is a small optimization.  Finally, remove
      the now-unused async argument of __gfn_to_pfn.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3520469d
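      A sketch of the resulting try_async_pf, abridged from the
      description (tracing and some details omitted), showing the two
      direct calls to the now-exported __gfn_to_pfn_memslot:

          static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault,
                                   gfn_t gfn, gva_t gva, pfn_t *pfn,
                                   bool write, bool *writable)
          {
                  struct kvm_memory_slot *slot = gfn_to_memslot(vcpu->kvm, gfn);
                  bool async;

                  /* async lookup: may bail out if the page is not present */
                  *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async,
                                              write, writable);
                  if (!async)
                          return false;   /* *pfn already holds the answer */

                  if (!prefault && can_do_async_pf(vcpu)) {
                          if (kvm_arch_setup_async_pf(vcpu, gva, gfn))
                                  return true;    /* handled asynchronously */
                  }

                  /* fall back to a synchronous, possibly blocking, lookup */
                  *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL,
                                              write, writable);
                  return false;
          }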
  12. 07 May 2015 (2 commits)
    • kvm,x86: load guest FPU context more eagerly · 653f52c3
      Rik van Riel authored
      Currently KVM clears the FPU bits in CR0.TS in the VMCS and traps to
      re-load them every time the guest accesses the FPU after a switch
      back into the guest from the host.
      
      This patch copies the x86 task-switch semantics for FPU loading: the
      FPU is loaded eagerly after first use if the system uses eager fpu
      mode, or if the guest uses the FPU frequently.
      
      In the latter case, after the FPU has been loaded 255 times, the
      fpu_counter rolls over and we revert to loading the FPU on demand,
      until it has been established that the guest is still actively
      using the FPU.
      
      This mirrors the x86 task-switch policy, which seems to work (see
      the sketch below).
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      653f52c3
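      A sketch of the resulting policy in kvm_put_guest_fpu, based on the
      description above (fpu_counter is a u8, hence the wrap at 255; the
      small "frequent user" threshold shown here is illustrative):

          void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
          {
                  if (!vcpu->guest_fpu_loaded)
                          return;

                  vcpu->guest_fpu_loaded = 0;
                  fpu_save_init(&vcpu->arch.guest_fpu);
                  __kernel_fpu_end();
                  ++vcpu->stat.fpu_reload;

                  /*
                   * In eager fpu mode, or once the guest has used the FPU
                   * several times in a row, skip the deactivation request
                   * and leave the FPU loaded.  When the u8 counter wraps
                   * past 255 we briefly go back to on-demand loading,
                   * re-testing whether the guest still uses the FPU.
                   */
                  if (!vcpu->arch.eager_fpu && ++vcpu->fpu_counter < 5)
                          kvm_make_request(KVM_REQ_DEACTIVATE_FPU, vcpu);

                  trace_kvm_fpu(0);
          }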
    • KVM: provide irq_unsafe kvm_guest_{enter|exit} · 0097d12e
      Christian Borntraeger authored
      Several kvm architectures disable interrupts before kvm_guest_enter,
      which then uses local_irq_save/restore to disable interrupts again
      (or for the first time).  Let's provide underscore versions of
      kvm_guest_{enter|exit} that assume they are called with interrupts
      already disabled (see the sketch below).  kvm_guest_enter now
      disables interrupts for the whole function, so the check for
      preemptible can be removed.
      
      The patch then converts s390/kvm to call these new functions, using
      local_irq_disable/enable, which are slightly cheaper than
      local_irq_save/restore.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0097d12e
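      A sketch of the split: the __ variants assume the caller already
      disabled interrupts, while the plain ones stay safe from any context
      by bracketing the call with an irq save/restore pair.

          /* irqs must already be disabled by the caller */
          static inline void __kvm_guest_enter(void)
          {
                  guest_enter();
          }

          static inline void __kvm_guest_exit(void)
          {
                  guest_exit();
          }

          static inline void kvm_guest_enter(void)
          {
                  unsigned long flags;

                  local_irq_save(flags);
                  __kvm_guest_enter();
                  local_irq_restore(flags);
          }

          static inline void kvm_guest_exit(void)
          {
                  unsigned long flags;

                  local_irq_save(flags);
                  __kvm_guest_exit();
                  local_irq_restore(flags);
          }

      A caller like s390 that already runs with interrupts disabled can
      then do local_irq_disable(); __kvm_guest_enter(); local_irq_enable();
      and skip the flags save/restore entirely.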
  13. 08 Apr 2015 (1 commit)
    • KVM: x86: BSP in MSR_IA32_APICBASE is writable · 58d269d8
      Nadav Amit authored
      After reset, the CPU can change the BSP, and that choice is what is
      used upon INIT.  Reset, however, should return the BSP that QEMU
      asked for, so the two cases must be handled separately (see the
      sketch below).
      
      To quote: "If the MP protocol has completed and a BSP is chosen, subsequent
      INITs (either to a specific processor or system wide) do not cause the MP
      protocol to be repeated."
      [Intel SDM 8.4.2: MP Initialization Protocol Requirements and Restrictions]
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Message-Id: <1427933438-12782-3-git-send-email-namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      58d269d8
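      A sketch of the resulting distinction (the names follow the
      description above; treat the bodies as illustrative):

          /* the BSP userspace (QEMU) designated: what reset must restore */
          static bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
          {
                  return vcpu->vcpu_id == vcpu->kvm->arch.bsp_vcpu_id;
          }

          /* the current BSP: tracked by the writable bit in APICBASE,
           * which the guest may move after the MP protocol completes */
          static bool kvm_vcpu_is_bsp(struct kvm_vcpu *vcpu)
          {
                  return (vcpu->arch.apic_base & MSR_IA32_APICBASE_BSP) != 0;
          }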
  14. 27 Mar 2015 (1 commit)
  15. 12 Mar 2015 (1 commit)
  16. 10 Mar 2015 (1 commit)
  17. 09 Mar 2015 (1 commit)
    • kvm,rcu,nohz: use RCU extended quiescent state when running KVM guest · 126a6a54
      Rik van Riel authored
      The host kernel is not doing anything while the CPU is executing
      a KVM guest VCPU, so it can be marked as being in an extended
      quiescent state, identical to that used when running user space
      code.
      
      The only exception to that rule is when the host handles an
      interrupt, which is already handled by the irq code, which
      calls rcu_irq_enter and rcu_irq_exit.
      
      The guest_enter and guest_exit functions already switch vtime
      accounting independently of context tracking.  Leave those calls
      where they are, instead of moving them into the context tracking
      code (see the sketch below).
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      126a6a54
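      A conceptual sketch of guest_enter after the change, not the exact
      diff (the CONTEXT_GUEST state name and the entry point into context
      tracking are assumptions for illustration):

          static inline void guest_enter(void)
          {
                  /* vtime accounting switches here, independent of
                   * context tracking, exactly as before */
                  if (vtime_accounting_enabled())
                          vtime_guest_enter(current);
                  else
                          current->flags |= PF_VCPU;

                  /* report an RCU extended quiescent state, the same one
                   * used when resuming user space; host interrupts remain
                   * covered by rcu_irq_enter/rcu_irq_exit */
                  if (context_tracking_is_enabled())
                          context_tracking_enter(CONTEXT_GUEST);
          }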
  18. 12 Feb 2015 (1 commit)
  19. 05 Feb 2015 (1 commit)
  20. 29 Jan 2015 (1 commit)
  21. 23 Jan 2015 (1 commit)
  22. 21 Jan 2015 (2 commits)
  23. 16 Jan 2015 (1 commit)
  24. 04 Dec 2014 (2 commits)
    • kvm: optimize GFN to memslot lookup with large slots amount · 9c1a5d38
      Igor Mammedov authored
      The current linear search doesn't scale well when a large number of
      memslots is in use and the looked-up slot is not near the beginning
      of the memslots array.  Taking into account that memslots don't
      overlap, it's possible to switch the sort order of the memslots
      array from 'npages' to 'base_gfn' and use binary search for memslot
      lookup by GFN (see the sketch below).
      
      As a result of switching to binary search, lookup times are reduced
      when a large number of memslots is in use.
      
      Following is a table of search_memslot() cycles during WS2008R2
      guest boot ("boot" = mostly the same slot is looked up; "+ ~10 min"
      = randomized lookups after about ten minutes of use):
      
                                      max      boot     boot + ~10 min
                                      cycles   average  average
      13 slots,  linear search   :    1450     28       30
      13 slots,  binary search   :    1400     30       40
      117 slots, linear search   :    13000    30       460
      117 slots, binary search   :    2000     35       180
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9c1a5d38
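      A sketch of the resulting lookup: the array is sorted by base_gfn in
      descending order, and the lru_slot probe added by the LRU-caching
      patch (next entry below) is kept in front of the search.

          static inline struct kvm_memory_slot *
          search_memslots(struct kvm_memslots *slots, gfn_t gfn)
          {
                  int start = 0, end = slots->used_slots;
                  int slot = atomic_read(&slots->lru_slot);
                  struct kvm_memory_slot *memslots = slots->memslots;

                  /* fast path: probe the last slot that hit */
                  if (gfn >= memslots[slot].base_gfn &&
                      gfn < memslots[slot].base_gfn + memslots[slot].npages)
                          return &memslots[slot];

                  /* slots sorted by base_gfn, descending: find the first
                   * (lowest) index whose base_gfn is <= gfn */
                  while (start < end) {
                          slot = start + (end - start) / 2;

                          if (gfn >= memslots[slot].base_gfn)
                                  end = slot;
                          else
                                  start = slot + 1;
                  }

                  if (gfn >= memslots[start].base_gfn &&
                      gfn < memslots[start].base_gfn + memslots[start].npages) {
                          atomic_set(&slots->lru_slot, start);
                          return &memslots[start];
                  }

                  return NULL;
          }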
    • kvm: search_memslots: add simple LRU memslot caching · d4ae84a0
      Igor Mammedov authored
      In a typical guest boot workload only 2-3 memslots are used
      extensively, and mostly via the same memslot lookup operation.
      
      Adding an LRU cache improves the average lookup time from 46 to 28
      cycles (~40%) for this workload (see the sketch below).
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d4ae84a0
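      A sketch of the cache on the then-linear lookup (illustrative; this
      is the state before the binary-search patch above replaced the scan):

          static inline struct kvm_memory_slot *
          search_memslots(struct kvm_memslots *slots, gfn_t gfn)
          {
                  int slot = atomic_read(&slots->lru_slot);
                  struct kvm_memory_slot *memslot = &slots->memslots[slot];

                  /* fast path: boot mostly repeats the same 2-3 lookups */
                  if (gfn >= memslot->base_gfn &&
                      gfn < memslot->base_gfn + memslot->npages)
                          return memslot;

                  kvm_for_each_memslot(memslot, slots)
                          if (gfn >= memslot->base_gfn &&
                              gfn < memslot->base_gfn + memslot->npages) {
                                  /* remember the hit for next time */
                                  atomic_set(&slots->lru_slot,
                                             memslot - slots->memslots);
                                  return memslot;
                          }

                  return NULL;
          }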
  25. 26 Nov 2014 (1 commit)
    • kvm: fix kvm_is_mmio_pfn() and rename to kvm_is_reserved_pfn() · d3fccc7e
      Ard Biesheuvel authored
      This reverts commit 85c8555f ("KVM: check for !is_zero_pfn() in
      kvm_is_mmio_pfn()") and renames the function to kvm_is_reserved_pfn.
      
      The problem being addressed by the patch above was that some ARM code
      based the memory mapping attributes of a pfn on the return value of
      kvm_is_mmio_pfn(), whose name indeed suggests that such pfns should
      be mapped as device memory.
      
      However, kvm_is_mmio_pfn() doesn't do quite what it says on the tin,
      and the existing non-ARM users were already using it in a way that
      suggests its name should have been 'kvm_is_reserved_pfn' from the
      beginning, e.g., to decide whether or not to call get_page/put_page
      on the pfn.  This means that returning false for the zero page is a
      mistake, and the patch above should be reverted (see the sketch
      below).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d3fccc7e
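      A sketch of the post-revert helper, as described above: with the
      is_zero_pfn() special case gone, the zero page is PageReserved and
      reports true again, which is what the get_page/put_page callers
      expect.

          static bool kvm_is_reserved_pfn(pfn_t pfn)
          {
                  if (pfn_valid(pfn))
                          return PageReserved(pfn_to_page(pfn));

                  /* no struct page: MMIO and friends count as reserved */
                  return true;
          }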
  26. 25 Nov 2014 (2 commits)