1. 27 December 2011, 4 commits
• KVM: introduce KVM_MEM_SLOTS_NUM macro · 93a5cef0
Authored by Xiao Guangrong
Introduce the KVM_MEM_SLOTS_NUM macro to replace KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS.
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
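A minimal sketch of the change in C; the slot-count values here are illustrative (the real ones are per-architecture), only the combination pattern matters:

    /* Illustrative per-arch counts. */
    #define KVM_MEMORY_SLOTS      32   /* user-visible slots */
    #define KVM_PRIVATE_MEM_SLOTS 4    /* internal slots */

    /* The macro the commit introduces: one name for the total. */
    #define KVM_MEM_SLOTS_NUM (KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS)

    /* Arrays covering all slots can now be sized by the single macro
     * instead of repeating the sum at every use site. */
    struct slots_sketch {
        unsigned long generation[KVM_MEM_SLOTS_NUM];
    };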
• KVM: Count the number of dirty pages for dirty logging · 7850ac54
Authored by Takuya Yoshikawa
Needed for the next patch, which uses this number to decide how to write-protect a slot.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
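A self-contained sketch of the counting idea, assuming a per-slot counter kept alongside the dirty bitmap (field and helper names are illustrative, not necessarily the kernel's):

    /* One bit per page; the counter tracks how many bits are set. */
    struct memslot_sketch {
        unsigned long *dirty_bitmap;
        unsigned long  nr_dirty_pages;
    };

    /* Test-and-set helper: returns nonzero if the bit was already set. */
    static int test_and_set_bit_sketch(unsigned long nr, unsigned long *addr)
    {
        unsigned long mask = 1UL << (nr % (8 * sizeof(unsigned long)));
        unsigned long *word = addr + nr / (8 * sizeof(unsigned long));
        int old = (*word & mask) != 0;
        *word |= mask;
        return old;
    }

    static void mark_page_dirty_sketch(struct memslot_sketch *slot,
                                       unsigned long rel_gfn)
    {
        /* Count a page only the first time it becomes dirty, so the
         * counter always equals the number of set bits in the bitmap. */
        if (!test_and_set_bit_sketch(rel_gfn, slot->dirty_bitmap))
            slot->nr_dirty_pages++;
    }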
• KVM: Fix include dependency for mmu_notifier · b297e672
Authored by Eric B Munson
The kvm_host struct can include an mmu_notifier struct, but mmu_notifier.h is not included directly.
Signed-off-by: Eric B Munson <emunson@mgebm.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
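The fix amounts to making the header self-sufficient; a sketch of the pattern (the exact hunk and config guard are assumed, not quoted from the patch):

    /* include/linux/kvm_host.h: embedding struct mmu_notifier requires its
     * full definition, so include the header explicitly rather than relying
     * on another header happening to pull it in first. */
    #include <linux/mmu_notifier.h>

    struct kvm_sketch {                    /* illustrative stand-in */
    #ifdef CONFIG_MMU_NOTIFIER
        struct mmu_notifier mmu_notifier;  /* needs the complete type */
    #endif
        /* ... */
    };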
• KVM: nVMX: Add KVM_REQ_IMMEDIATE_EXIT · d6185f20
Authored by Nadav Har'El
This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT. This bit requests that the next time we enter the guest, we run it only for as short a time as possible before exiting again.
      
We use this new option in nested VMX: when L1 launches L2 but L0 wishes L1 to continue running so it can inject an event into it, we unfortunately cannot just pretend to have run L2 for a little while; we must really launch L2, otherwise certain one-off vmcs12 parameters (namely, L1 injection into L2) will be lost. So the existing code runs L2 in this case.
But L2 could potentially run for a long time until it exits, and the injection into L1 would be delayed. The new KVM_REQ_IMMEDIATE_EXIT allows us to request that L2 be entered, as necessary, but exit as soon as possible after entry.
      
      Our implementation of this request uses smp_send_reschedule() to send a
      self-IPI, with interrupts disabled. The interrupts remain disabled until the
      guest is entered, and then, after the entry is complete (often including
      processing an injection and jumping to the relevant handler), the physical
      interrupt is noticed and causes an exit.
      
      On recent Intel processors, we could have achieved the same goal by using
      MTF instead of a self-IPI. Another technique worth considering in the future
      is to use VM_EXIT_ACK_INTR_ON_EXIT and a highest-priority vector IPI - to
      slightly improve performance by avoiding the useless interrupt handler
      which ends up being called when smp_send_reschedule() is used.
Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
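A hedged sketch of the mechanism, with stub types and helpers so it stands alone (in the kernel these are the real primitives: local_irq_disable(), smp_send_reschedule(), the vcpu->requests bitmask):

    struct vcpu_sketch { int cpu; unsigned long requests; };
    #define REQ_IMMEDIATE_EXIT_BIT 0  /* stand-in for KVM_REQ_IMMEDIATE_EXIT */

    static int test_and_clear_request(struct vcpu_sketch *v, int bit)
    {
        unsigned long mask = 1UL << bit;
        int was_set = (v->requests & mask) != 0;
        v->requests &= ~mask;
        return was_set;
    }

    /* Stubs standing in for kernel primitives. */
    static void local_irq_disable_stub(void) {}
    static void local_irq_enable_stub(void) {}
    static void smp_send_reschedule_stub(int cpu) { (void)cpu; }
    static void enter_guest_stub(struct vcpu_sketch *v) { (void)v; }

    static void vcpu_enter_guest_sketch(struct vcpu_sketch *vcpu)
    {
        int immediate_exit =
            test_and_clear_request(vcpu, REQ_IMMEDIATE_EXIT_BIT);

        local_irq_disable_stub();
        if (immediate_exit)
            /* Self-IPI while interrupts are off: it stays pending across
             * the guest entry, so the entry completes (including any event
             * injection) and the pending interrupt then forces an exit. */
            smp_send_reschedule_stub(vcpu->cpu);

        enter_guest_stub(vcpu);   /* runs briefly, exits on the IPI */
        local_irq_enable_stub();
    }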
2. 26 September 2011, 4 commits
• KVM: Fix simultaneous NMIs · 7460fb4a
Authored by Avi Kivity
      If simultaneous NMIs happen, we're supposed to queue the second
      and next (collapsing them), but currently we sometimes collapse
      the second into the first.
      
      Fix by using a counter for pending NMIs instead of a bool; since
      the counter limit depends on whether the processor is currently
      in an NMI handler, which can only be checked in vcpu context
      (via the NMI mask), we add a new KVM_REQ_NMI to request recalculation
      of the counter.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
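A hedged sketch of the counter logic: x86 can latch at most one NMI while another is being handled, so the pending count is capped at 2 normally and at 1 inside an NMI handler (names are illustrative):

    struct vcpu_nmi_sketch {
        unsigned int nmi_pending;    /* recalculated in vcpu context */
        int          in_nmi_handler; /* derived from the NMI mask */
    };

    static unsigned int min_uint(unsigned int a, unsigned int b)
    {
        return a < b ? a : b;
    }

    /* Run in vcpu context after a KVM_REQ_NMI request: fold newly queued
     * NMIs into the pending count and clamp to what the CPU could hold. */
    static void process_nmi_sketch(struct vcpu_nmi_sketch *vcpu,
                                   unsigned int newly_queued)
    {
        unsigned int limit = vcpu->in_nmi_handler ? 1 : 2;

        vcpu->nmi_pending = min_uint(vcpu->nmi_pending + newly_queued, limit);
    }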
• KVM: Clean up and extend rate-limited output · bd80158a
Authored by Jan Kiszka
The use of printk_ratelimit is discouraged; replace it with pr_*_ratelimited or __ratelimit. While at it, convert the remaining guest-triggerable printks to rate-limited variants.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
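A sketch of the conversion pattern using the real kernel helpers (the messages themselves are illustrative):

    #include <linux/kernel.h>
    #include <linux/ratelimit.h>

    /* Before: one global limit shared by every printk_ratelimit() caller.
     *
     *     if (printk_ratelimit())
     *         printk(KERN_WARNING "kvm: bad guest state\n");
     *
     * After: a per-callsite limit via the pr_*_ratelimited helpers. */
    static void report_bad_guest_state_sketch(void)
    {
        pr_warn_ratelimited("kvm: bad guest state\n");
    }

    /* __ratelimit() suits callsites that need their own explicit state,
     * here at most 10 messages per 5-second window. */
    static DEFINE_RATELIMIT_STATE(kvm_rs_sketch, 5 * HZ, 10);

    static void report_with_explicit_state_sketch(void)
    {
        if (__ratelimit(&kvm_rs_sketch))
            pr_warn("kvm: guest-triggerable condition\n");
    }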
• KVM: Intelligent device lookup on I/O bus · 743eeb0b
Authored by Sasha Levin
      Currently the method of dealing with an IO operation on a bus (PIO/MMIO)
      is to call the read or write callback for each device registered
      on the bus until we find a device which handles it.
      
      Since the number of devices on a bus can be significant due to ioeventfds
      and coalesced MMIO zones, this leads to a lot of overhead on each IO
      operation.
      
Instead of registering devices, we now register ranges, each of which points to a device. Lookup is done using an efficient bsearch instead of a linear search.
      
A performance test compared exit counts per second with 200 ioeventfds created on one byte while the guest continuously accessed a different byte (triggering usermode exits). Before the patch the guest achieved 259k exits per second; after the patch it achieves 274k exits per second.
      
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
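A runnable userspace sketch of the range-based lookup (struct and function names are illustrative, not the kernel's):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* The bus keeps a sorted, non-overlapping array of ranges, each
     * pointing at the device that handles it. */
    struct io_range_sketch {
        uint64_t addr;
        uint32_t len;
        void    *dev;
    };

    /* bsearch comparator: 0 when the address falls inside the range. */
    static int range_cmp(const void *key, const void *elem)
    {
        uint64_t a = *(const uint64_t *)key;
        const struct io_range_sketch *r = elem;

        if (a < r->addr)
            return -1;
        if (a >= r->addr + r->len)
            return 1;
        return 0;
    }

    static void *bus_lookup_sketch(struct io_range_sketch *ranges, size_t n,
                                   uint64_t addr)
    {
        struct io_range_sketch *r =
            bsearch(&addr, ranges, n, sizeof(*ranges), range_cmp);
        return r ? r->dev : NULL;   /* O(log n) instead of a linear scan */
    }

    int main(void)
    {
        int devA, devB;
        struct io_range_sketch bus[] = {
            { 0x1000, 4, &devA },
            { 0x2000, 8, &devB },
        };

        printf("%s\n", bus_lookup_sketch(bus, 2, 0x2004) == &devB
                           ? "hit devB" : "miss");
        return 0;
    }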
• KVM: Make coalesced mmio use a device per zone · 2b3c246a
Authored by Sasha Levin
      This patch changes coalesced mmio to create one mmio device per
      zone instead of handling all zones in one device.
      
      Doing so enables us to take advantage of existing locking and prevents
      a race condition between coalesced mmio registration/unregistration
      and lookups.
Suggested-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
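A hedged sketch of the resulting shape (names illustrative): each zone becomes its own device on the bus, so registration and removal ride on the bus's existing locking instead of a hand-rolled zone list inside one device:

    struct zone_sketch {
        unsigned long addr;
        unsigned long size;
    };

    /* One device per zone: the device *is* the zone now. */
    struct coalesced_dev_sketch {
        struct zone_sketch zone;
        /* plus the ring buffer shared with userspace, device ops, ... */
    };

    /* Registration sketch: KVM_REGISTER_COALESCED_MMIO allocates a device
     * for the new zone and registers it like any other bus device, e.g.
     *
     *     dev = alloc_coalesced_dev(zone);
     *     register_on_bus(bus, zone.addr, zone.size, dev);
     *
     * Unregistration removes just that one device under the bus lock,
     * closing the race with concurrent lookups. */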
3. 24 July 2011, 1 commit
4. 14 July 2011, 1 commit
• KVM: Steal time implementation · c9aaa895
Authored by Glauber Costa
      To implement steal time, we need the hypervisor to pass the guest
      information about how much time was spent running other processes
      outside the VM, while the vcpu had meaningful work to do - halt
      time does not count.
      
This information is acquired through the run_delay field of the delayacct/schedstats infrastructure, which counts time spent in a runqueue but not running.
      
Steal time is per-cpu information, so the traditional MSR-based infrastructure is used. A new MSR, KVM_MSR_STEAL_TIME, holds the address of the memory area containing the steal time information.
      
This patch contains the hypervisor part of the steal time infrastructure, and can be backported independently of the guest portion.
      
      [avi, yongjie: export delayacct_on, to avoid build failures in some configs]
Signed-off-by: Glauber Costa <glommer@redhat.com>
Tested-by: Eric B Munson <emunson@mgebm.net>
      CC: Rik van Riel <riel@redhat.com>
      CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Yongjie Ren <yongjie.ren@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
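A hedged sketch of the shared record and the host-side update; the field layout is illustrative (the ABI is defined by the patch itself), and the seqlock-style version counter lets the guest detect a torn read:

    #include <stdint.h>

    struct steal_time_sketch {
        uint64_t steal;    /* ns spent runnable but not running (run_delay) */
        uint32_t version;  /* even = stable, odd = host mid-update */
        uint32_t flags;
        uint32_t pad[12];  /* room to grow without resizing the record */
    };

    /* Host side, around each vcpu run: publish the delta of the task's
     * run_delay since the last update. */
    static void record_steal_time_sketch(struct steal_time_sketch *st,
                                         uint64_t run_delay_now,
                                         uint64_t *last_run_delay)
    {
        st->version += 1;   /* odd: guest should retry its read */
        st->steal   += run_delay_now - *last_run_delay;
        *last_run_delay = run_delay_now;
        st->version += 1;   /* even again: record is stable */
    }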
5. 12 July 2011, 1 commit
6. 22 May 2011, 2 commits
7. 11 May 2011, 4 commits
8. 18 March 2011, 5 commits
9. 12 January 2011, 13 commits
10. 24 October 2010, 5 commits