  1. 12 July 2011, 5 commits
    • KVM: PPC: Allocate RMAs (Real Mode Areas) at boot for use by guests · aa04b4cc
      Authored by Paul Mackerras
      This adds infrastructure which will be needed to allow book3s_hv KVM to
      run on older POWER processors, including PPC970, which don't support
      the Virtual Real Mode Area (VRMA) facility, but only the Real Mode
      Offset (RMO) facility.  These processors require a physically
      contiguous, aligned area of memory for each guest.  When the guest does
      an access in real mode (MMU off), the address is compared against a
      limit value, and if it is lower, the address is ORed with an offset
      value (from the Real Mode Offset Register (RMOR)) and the result becomes
      the real address for the access.  The size of the RMA has to be one of
      a set of supported values, which usually includes 64MB, 128MB, 256MB
      and some larger powers of 2.
      
      Since we are unlikely to be able to allocate 64MB or more of physically
      contiguous memory after the kernel has been running for a while, we
      allocate a pool of RMAs at boot time using the bootmem allocator.  The
      size and number of the RMAs can be set using the kvm_rma_size=xx and
      kvm_rma_count=xx kernel command line options.
      
      KVM exports a new capability, KVM_CAP_PPC_RMA, to signal the availability
      of the pool of preallocated RMAs.  The capability value is 1 if the
      processor can use an RMA but doesn't require one (because it supports
      the VRMA facility), or 2 if the processor requires an RMA for each guest.
      
      This adds a new ioctl, KVM_ALLOCATE_RMA, which allocates an RMA from the
      pool and returns a file descriptor which can be used to map the RMA.  It
      also returns the size of the RMA in the argument structure.
      
      Having an RMA means we will get multiple KVM_SET_USER_MEMORY_REGION
      ioctl calls from userspace.  To cope with this, we now preallocate the
      kvm->arch.ram_pginfo array when the VM is created with a size sufficient
      for up to 64GB of guest memory.  Subsequently we will get rid of this
      array and use memory associated with each memslot instead.
      
      This moves most of the code that translates the user addresses into
      host pfns (page frame numbers) out of kvmppc_prepare_vrma up one level
      to kvmppc_core_prepare_memory_region.  Also, instead of having to look
      up the VMA for each page in order to check the page size, we now check
      that the pages we get are compound pages of 16MB.  However, if we are
      adding memory that is mapped to an RMA, we don't bother with calling
      get_user_pages_fast and instead just offset from the base pfn for the
      RMA.
      
      Typically the RMA gets added after vcpus are created, which makes it
      inconvenient to have the LPCR (logical partition control register) value
      in the vcpu->arch struct, since the LPCR controls whether the processor
      uses RMA or VRMA for the guest.  This moves the LPCR value into the
      kvm->arch struct and arranges for the MER (mediated external request)
      bit, which is the only bit that varies between vcpus, to be set in
      assembly code when going into the guest if there is a pending external
      interrupt request.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      aa04b4cc
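
      As a rough userspace sketch (not part of the patch; the helper name and
      minimal error handling are assumptions, and it presumes a powerpc host
      whose <linux/kvm.h> carries the definitions added here), a VMM might
      probe KVM_CAP_PPC_RMA and map an RMA like this:

        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/kvm.h>

        void *map_rma(int kvm_fd, int vm_fd, unsigned long long *size_out)
        {
            /* 1 = RMA usable but optional (VRMA present), 2 = RMA required */
            int mode = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_RMA);
            if (mode <= 0)
                return NULL;                     /* no preallocated pool */

            struct kvm_allocate_rma rma;         /* kernel fills in rma_size */
            int rma_fd = ioctl(vm_fd, KVM_ALLOCATE_RMA, &rma);
            if (rma_fd < 0)
                return NULL;

            *size_out = rma.rma_size;
            return mmap(NULL, rma.rma_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, rma_fd, 0);
        }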
    • KVM: PPC: Allow book3s_hv guests to use SMT processor modes · 371fefd6
      Authored by Paul Mackerras
      This lifts the restriction that book3s_hv guests can only run one
      hardware thread per core, and allows them to use up to 4 threads
      per core on POWER7.  The host still has to run single-threaded.
      
      This capability is advertised to qemu through a new KVM_CAP_PPC_SMT
      capability.  The return value of the ioctl querying this capability
      is the number of vcpus per virtual CPU core (vcore), currently 4.
      
      To use this, the host kernel should be booted with all threads
      active, and then all the secondary threads should be offlined.
      This will put the secondary threads into nap mode.  KVM will then
      wake them from nap mode and use them for running guest code (while
      they are still offline).  To wake the secondary threads, we send
      them an IPI using a new xics_wake_cpu() function, implemented in
      arch/powerpc/sysdev/xics/icp-native.c.  In other words, at this stage
      we assume that the platform has a XICS interrupt controller and
      we are using icp-native.c to drive it.  Since the woken thread will
      need to acknowledge and clear the IPI, we also export the base
      physical address of the XICS registers using kvmppc_set_xics_phys()
      for use in the low-level KVM book3s code.
      
      When a vcpu is created, it is assigned to a virtual CPU core.
      The vcore number is obtained by dividing the vcpu number by the
      number of threads per core in the host.  This number is exported
      to userspace via the KVM_CAP_PPC_SMT capability.  If qemu wishes
      to run the guest in single-threaded mode, it should make all vcpu
      numbers be multiples of the number of threads per core.
      
      We distinguish three states of a vcpu: runnable (i.e., ready to execute
      the guest), blocked (that is, idle), and busy in host.  We currently
      implement a policy that the vcore can run only when all its threads
      are runnable or blocked.  This way, if a vcpu needs to execute elsewhere
      in the kernel or in qemu, it can do so without being starved of CPU
      by the other vcpus.
      
      When a vcore starts to run, it executes in the context of one of the
      vcpu threads.  The other vcpu threads all go to sleep and stay asleep
      until something happens requiring the vcpu thread to return to qemu,
      or to wake up to run the vcore (this can happen when another vcpu
      thread goes from busy in host state to blocked).
      
      It can happen that a vcpu goes from blocked to runnable state (e.g.
      because of an interrupt), and the vcore it belongs to is already
      running.  In that case it can start to run immediately as long as
      none of the vcpus in the vcore have started to exit the guest.
      We send the next free thread in the vcore an IPI to get it to start
      to execute the guest.  It synchronizes with the other threads via
      the vcore->entry_exit_count field to make sure that it doesn't go
      into the guest if the other vcpus are exiting by the time that it
      is ready to actually enter the guest.
      
      Note that there is no fixed relationship between the hardware thread
      number and the vcpu number.  Hardware threads are assigned to vcpus
      as they become runnable, so we will always use the lower-numbered
      hardware threads in preference to higher-numbered threads if not all
      the vcpus in the vcore are runnable, regardless of which vcpus are
      runnable.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      371fefd6
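
      A minimal sketch of how qemu-like userspace might honour KVM_CAP_PPC_SMT
      when creating vcpus (the helper name and the single-threaded-guest
      policy are assumptions, not part of the patch):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* space vcpu ids by threads-per-vcore so each guest core runs
         * single-threaded, as described above */
        int create_single_threaded_vcpus(int kvm_fd, int vm_fd, int ncpus)
        {
            int threads = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_SMT);
            if (threads <= 0)
                threads = 1;                /* capability absent: no SMT */

            for (int i = 0; i < ncpus; i++)
                if (ioctl(vm_fd, KVM_CREATE_VCPU, i * threads) < 0)
                    return -1;              /* sketch only: leaks earlier vcpu fds */
            return 0;
        }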
    • KVM: PPC: Accelerate H_PUT_TCE by implementing it in real mode · 54738c09
      Authored by David Gibson
      This improves I/O performance for guests using the PAPR
      paravirtualization interface by making the H_PUT_TCE hcall faster, by
      implementing it in real mode.  H_PUT_TCE is used for updating virtual
      IOMMU tables, and is used both for virtual I/O and for real I/O in the
      PAPR interface.
      
      Since this moves the IOMMU tables into the kernel, we define a new
      KVM_CREATE_SPAPR_TCE ioctl to allow qemu to create the tables.  The
      ioctl returns a file descriptor which can be used to mmap the newly
      created table.  The qemu driver models use them in the same way as
      userspace managed tables, but they can be updated directly by the
      guest with a real-mode H_PUT_TCE implementation, reducing the number
      of host/guest context switches during guest IO.
      
      There are certain circumstances where it is useful for userland qemu
      to write to the TCE table even if the kernel H_PUT_TCE path is used
      most of the time.  Specifically, allowing this will avoid awkwardness
      when we need to reset the table.  More importantly, we will in the
      future need to write the table in order to restore its state after a
      checkpoint resume or migration.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      54738c09
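
      A sketch of the userspace side (the helper name and the 4K-page,
      8-bytes-per-TCE sizing of the mmap are assumptions of this sketch):

        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/kvm.h>

        void *create_tce_table(int vm_fd, unsigned long long liobn,
                               unsigned int window_size)
        {
            struct kvm_create_spapr_tce args = {
                .liobn       = liobn,        /* logical I/O bus number */
                .window_size = window_size,  /* DMA window size in bytes */
            };
            int tce_fd = ioctl(vm_fd, KVM_CREATE_SPAPR_TCE, &args);
            if (tce_fd < 0)
                return NULL;

            /* assume one 8-byte TCE per 4K page of DMA window */
            size_t bytes = (size_t)(window_size >> 12) * 8;
            return mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                        MAP_SHARED, tce_fd, 0);
        }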
    • KVM: PPC: Add support for Book3S processors in hypervisor mode · de56a948
      Authored by Paul Mackerras
      This adds support for KVM running on 64-bit Book 3S processors,
      specifically POWER7, in hypervisor mode.  Using hypervisor mode means
      that the guest can use the processor's supervisor mode.  That means
      that the guest can execute privileged instructions and access privileged
      registers itself without trapping to the host.  This gives excellent
      performance, but does mean that KVM cannot emulate a processor
      architecture other than the one that the hardware implements.
      
      This code assumes that the guest is running paravirtualized using the
      PAPR (Power Architecture Platform Requirements) interface, which is the
      interface that IBM's PowerVM hypervisor uses.  That means that existing
      Linux distributions that run on IBM pSeries machines will also run
      under KVM without modification.  In order to communicate the PAPR
      hypercalls to qemu, this adds a new KVM_EXIT_PAPR_HCALL exit code
      to include/linux/kvm.h.
      
      Currently the choice between book3s_hv support and book3s_pr support
      (i.e. the existing code, which runs the guest in user mode) has to be
      made at kernel configuration time, so a given kernel binary can only
      do one or the other.
      
      This new book3s_hv code doesn't support MMIO emulation at present.
      Since we are running paravirtualized guests, this isn't a serious
      restriction.
      
      With the guest running in supervisor mode, most exceptions go straight
      to the guest.  We will never get data or instruction storage or segment
      interrupts, alignment interrupts, decrementer interrupts, program
      interrupts, single-step interrupts, etc., coming to the hypervisor from
      the guest.  Therefore this introduces a new KVMTEST_NONHV macro for the
      exception entry path so that we don't have to do the KVM test on entry
      to those exception handlers.
      
      We do however get hypervisor decrementer, hypervisor data storage,
      hypervisor instruction storage, and hypervisor emulation assist
      interrupts, so we have to handle those.
      
      In hypervisor mode, real-mode accesses can access all of RAM, not just
      a limited amount.  Therefore we put all the guest state in the vcpu.arch
      and use the shadow_vcpu in the PACA only for temporary scratch space.
      We allocate the vcpu with kzalloc rather than vzalloc, and we don't use
      anything in the kvmppc_vcpu_book3s struct, so we don't allocate it.
      We don't have a shared page with the guest, but we still need a
      kvm_vcpu_arch_shared struct to store the values of various registers,
      so we include one in the vcpu_arch struct.
      
      The POWER7 processor has a restriction that all threads in a core have
      to be in the same partition.  MMU-on kernel code counts as a partition
      (partition 0), so we have to do a partition switch on every entry to and
      exit from the guest.  At present we require the host and guest to run
      in single-thread mode because of this hardware restriction.
      
      This code allocates a hashed page table for the guest and initializes
      it with HPTEs for the guest's Virtual Real Memory Area (VRMA).  We
      require that the guest memory is allocated using 16MB huge pages, in
      order to simplify the low-level memory management.  This also means that
      we can get away without tracking paging activity in the host for now,
      since huge pages can't be paged or swapped.
      
      This also adds a few new exports needed by the book3s_hv code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      de56a948
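
      A sketch of how the new exit code might surface in a VMM's run loop
      (handle_papr_hcall() is a hypothetical dispatcher, not part of KVM):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        __u64 handle_papr_hcall(__u64 nr, __u64 *args);   /* hypothetical */

        void run_vcpu(int vcpu_fd, struct kvm_run *run)
        {
            for (;;) {
                if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
                    break;
                if (run->exit_reason != KVM_EXIT_PAPR_HCALL)
                    break;                  /* defer to the usual exit handlers */
                /* hcall number and arguments arrive in kvm_run; the return
                 * value written here is delivered to the guest on re-entry */
                run->papr_hcall.ret = handle_papr_hcall(run->papr_hcall.nr,
                                                        run->papr_hcall.args);
            }
        }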
    • KVM: Clarify KVM_ASSIGN_PCI_DEVICE documentation · 91e3d71d
      Authored by Jan Kiszka
      Neither host_irq nor the guest_msi struct is used anymore today.
      Tag the former, drop the latter to avoid confusion.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      91e3d71d
  2. 22 May 2011, 1 commit
  3. 11 May 2011, 1 commit
  4. 12 January 2011, 1 commit
  5. 24 October 2010, 2 commits
  6. 01 August 2010, 2 commits
  7. 17 May 2010, 3 commits
  8. 25 April 2010, 3 commits
  9. 01 March 2010, 6 commits
  10. 08 December 2009, 1 commit
  11. 03 December 2009, 7 commits
    • KVM: s390: Make psw available on all exits, not just a subset · d7b0b5eb
      Authored by Carsten Otte
      This patch moves s390 processor status word into the base kvm_run
      struct and keeps it up-to date on all userspace exits.
      
      The userspace ABI is broken by this, however there are no applications
      in the wild using this.  A capability check is provided so users can
      verify the updated API exists.
      
      Cc: stable@kernel.org
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      d7b0b5eb
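
      A small sketch of what userspace gains (helper name assumed; it presumes
      an s390 host whose headers define KVM_CAP_S390_PSW): the psw can be read
      straight out of kvm_run after any exit.

        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        void dump_psw(int kvm_fd, struct kvm_run *run)
        {
            /* on older kernels the fields exist but are not kept up to date */
            if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_PSW) <= 0)
                return;
            printf("psw mask %016llx addr %016llx\n",
                   (unsigned long long)run->psw_mask,
                   (unsigned long long)run->psw_addr);
        }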
    • KVM: x86: Add KVM_GET/SET_VCPU_EVENTS · 3cfc3092
      Authored by Jan Kiszka
      This new IOCTL exports all previously user-invisible state related to
      exceptions, interrupts, and NMIs.  Together with appropriate user space
      changes, this fixes sporadic problems with vmsave/restore, live migration,
      and system reset.
      
      [avi: future-proof abi by adding a flags field]
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      3cfc3092
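
      A sketch of the save/restore pattern this enables (the helper name and
      bare-bones error handling are assumptions):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* copy pending exception/interrupt/NMI state from one vcpu fd to
         * another, e.g. across a live migration */
        int transfer_vcpu_events(int src_vcpu_fd, int dst_vcpu_fd)
        {
            struct kvm_vcpu_events events;

            if (ioctl(src_vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
                return -1;
            /* flags is the future-proofing field mentioned above; GET
             * currently returns it cleared */
            return ioctl(dst_vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
        }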
    • KVM: VMX: Report unexpected simultaneous exceptions as internal errors · 65ac7264
      Authored by Avi Kivity
      These happen when we trap an exception when another exception is being
      delivered; we only expect these with MCEs and page faults.  If something
      unexpected happens, things probably went south and we're better off reporting
      an internal error and freezing.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      65ac7264
    • KVM: Allow internal errors reported to userspace to carry extra data · a9c7399d
      Authored by Avi Kivity
      Usually userspace will freeze the guest so we can inspect it, but some
      internal state is not available.  Add extra data to internal error
      reporting so we can expose it to the debugger.  Extra data is specific
      to the suberror.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      a9c7399d
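
      A sketch (hypothetical helper, minimal formatting) of how a debugging
      VMM might dump the new payload, covering both this commit and the
      simultaneous-exception suberror above:

        #include <stdio.h>
        #include <linux/kvm.h>

        void report_internal_error(struct kvm_run *run)
        {
            if (run->exit_reason != KVM_EXIT_INTERNAL_ERROR)
                return;

            /* e.g. KVM_INTERNAL_ERROR_EMULATION or KVM_INTERNAL_ERROR_SIMUL_EX */
            fprintf(stderr, "KVM internal error, suberror %u\n",
                    run->internal.suberror);
            for (__u32 i = 0; i < run->internal.ndata; i++)
                fprintf(stderr, "  data[%u] = 0x%llx\n", i,
                        (unsigned long long)run->internal.data[i]);
        }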
    • KVM: Reorder IOCTLs in main kvm.h · c54d2aba
      Authored by Jan Kiszka
      Obviously, people tend to extend this header at the bottom - more or
      less blindly. Ensure that deprecated stuff gets its own corner again by
      moving things to the top. Also add some comments and reindent IOCTLs to
      make them more readable and reduce the risk of number collisions.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      c54d2aba
    • KVM: allow userspace to adjust kvmclock offset · afbcf7ab
      Authored by Glauber Costa
      When we migrate a kvm guest that uses pvclock between two hosts, we may
      suffer a large skew. This is because there can be significant differences
      between the monotonic clock of the hosts involved. When a new host with
      a much larger monotonic time starts running the guest, the view of time
      will be significantly impacted.
      
      Situation is much worse when we do the opposite, and migrate to a host with
      a smaller monotonic clock.
      
      This proposed ioctl will allow userspace to inform us of the monotonic
      clock value in the source host, so we can keep the time skew small and,
      more importantly, ensure that guest time never goes backwards.  Userspace
      may also need to query the current clock value, since from the first
      migration onwards it will no longer be reflected by a simple call to
      clock_gettime().
      
      [marcelo: future-proof abi with a flags field]
      [jan: fix KVM_GET_CLOCK by clearing flags field instead of checking it]
      Signed-off-by: Glauber Costa <glommer@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      afbcf7ab
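
      A sketch of the migration flow this enables (helper name assumed; real
      code would carry the value over the migration stream rather than hold
      both VM fds at once):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* read the kvmclock value on the source VM and install it on the
         * destination so the guest's clock stays monotonic across migration */
        int transfer_kvmclock(int src_vm_fd, int dst_vm_fd)
        {
            struct kvm_clock_data data;

            if (ioctl(src_vm_fd, KVM_GET_CLOCK, &data) < 0)
                return -1;              /* flags comes back cleared */
            return ioctl(dst_vm_fd, KVM_SET_CLOCK, &data);
        }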
    • KVM: Xen PV-on-HVM guest support · ffde22ac
      Authored by Ed Swierk
      Support for Xen PV-on-HVM guests can be implemented almost entirely in
      userspace, except for handling one annoying MSR that maps a Xen
      hypercall blob into guest address space.
      
      A generic mechanism to delegate MSR writes to userspace seems overkill
      and risks encouraging similar MSR abuse in the future.  Thus this patch
      adds special support for the Xen HVM MSR.
      
      I implemented a new ioctl, KVM_XEN_HVM_CONFIG, that lets userspace tell
      KVM which MSR the guest will write to, as well as the starting address
      and size of the hypercall blobs (one each for 32-bit and 64-bit) that
      userspace has loaded from files.  When the guest writes to the MSR, KVM
      copies one page of the blob from userspace to the guest.
      
      I've tested this patch with a hacked-up version of Gerd's userspace
      code, booting a number of guests (CentOS 5.3 i386 and x86_64, and
      FreeBSD 8.0-RC1 amd64) and exercising PV network and block devices.
      
      [jan: fix i386 build warning]
      [avi: future proof abi with a flags field]
      Signed-off-by: Ed Swierk <eswierk@aristanetworks.com>
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      ffde22ac
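
      A sketch of the userspace setup (the helper name, MSR number, and blob
      buffers are placeholders; blob sizes are given in pages):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int configure_xen_hvm(int vm_fd, __u32 msr,
                              void *blob32, __u8 pages32,
                              void *blob64, __u8 pages64)
        {
            struct kvm_xen_hvm_config cfg = {
                .msr          = msr,                     /* MSR the guest writes */
                .blob_addr_32 = (unsigned long)blob32,   /* userspace blob copies */
                .blob_size_32 = pages32,
                .blob_addr_64 = (unsigned long)blob64,
                .blob_size_64 = pages64,
            };
            /* after this, a guest write to 'msr' makes KVM copy the selected
             * page of the blob into guest memory */
            return ioctl(vm_fd, KVM_XEN_HVM_CONFIG, &cfg);
        }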
  12. 10 September 2009, 8 commits
    • KVM: VMX: Introduce KVM_SET_IDENTITY_MAP_ADDR ioctl · b927a3ce
      Authored by Sheng Yang
      KVM now allows the guest physical address of EPT's identity mapping page
      to be modified.
      
      (changes from v1: discard an unnecessary check, and change the ioctl to
      accept the parameter by address rather than by value)
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      b927a3ce
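
      A sketch of the call (helper name assumed; the ioctl must run before any
      vcpu is created):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int set_ept_identity_map(int vm_fd, __u64 gpa)
        {
            /* per the v1->v2 change noted above, the ioctl takes the address
             * of the parameter rather than the value itself */
            return ioctl(vm_fd, KVM_SET_IDENTITY_MAP_ADDR, &gpa);
        }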
    • KVM: add ioeventfd support · d34e6b17
      Authored by Gregory Haskins
      ioeventfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
      signal when written to by a guest.  Host userspace can register any
      arbitrary IO address with a corresponding eventfd and then pass the eventfd
      to a specific end-point of interest for handling.
      
      Normal IO requires a blocking round-trip since the operation may cause
      side-effects in the emulated model or may return data to the caller.
      Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
      "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
      device model synchronously before returning control back to the vcpu.
      
      However, there is a subclass of IO which acts purely as a trigger for
      other IO (such as to kick off an out-of-band DMA request, etc).  For these
      patterns, the synchronous call is particularly expensive since we really
      only want to get our notification transmitted asynchronously and
      return as quickly as possible.  All the synchronous infrastructure to ensure
      proper data-dependencies are met in the normal IO case is just unnecessary
      overhead for signalling.  This adds additional computational load on the
      system, as well as latency to the signalling path.
      
      Therefore, we provide a mechanism for registration of an in-kernel trigger
      point that allows the VCPU to only require a very brief, lightweight
      exit just long enough to signal an eventfd.  This also means that any
      clients compatible with the eventfd interface (which includes userspace
      and kernelspace equally well) can now register to be notified. The end
      result should be a more flexible and higher performance notification API
      for the backend KVM hypervisor and peripheral components.
      
      To test this theory, we built a test-harness called "doorbell".  This
      module has a function called "doorbell_ring()" which simply increments a
      counter for each time the doorbell is signaled.  It supports signalling
      from either an eventfd, or an ioctl().
      
      We then wired up two paths to the doorbell: one goes through QEMU via a
      registered io region and the doorbell ioctl(); the other goes directly
      via ioeventfd.
      
      You can download this test harness here:
      
      ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
      
      The measured results are as follows:
      
      qemu-mmio:       110000 iops, 9.09us rtt
      ioeventfd-mmio: 200100 iops, 5.00us rtt
      ioeventfd-pio:  367300 iops, 2.72us rtt
      
      I didn't measure qemu-pio, because I have to figure out how to register a
      PIO region with qemu's device model, and I got lazy.  However, extrapolating
      from the NULLIO-run deltas of +2.56us for MMIO and -350ns for HC, we get:
      
      qemu-pio:      153139 iops, 6.53us rtt
      ioeventfd-hc: 412585 iops, 2.37us rtt
      
      these are just for fun, for now, until I can gather more data.
      
      Here is a graph for your convenience:
      
      http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
      
      The conclusion to draw is that we save about 4us by skipping the userspace
      hop.
      
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      d34e6b17
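
      A sketch of registering an MMIO doorbell (the helper name and guest
      physical address are placeholders):

        #include <sys/eventfd.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int register_mmio_doorbell(int vm_fd, __u64 gpa)
        {
            int efd = eventfd(0, 0);
            if (efd < 0)
                return -1;

            struct kvm_ioeventfd io = {
                .addr  = gpa,       /* doorbell register in guest physical space */
                .len   = 4,         /* match 4-byte writes */
                .fd    = efd,
                .flags = 0,         /* MMIO; set KVM_IOEVENTFD_FLAG_PIO for ports */
            };
            if (ioctl(vm_fd, KVM_IOEVENTFD, &io) < 0)
                return -1;
            /* guest writes now just signal efd; the interested backend can
             * poll/read it instead of taking a heavyweight exit */
            return efd;
        }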
    • KVM: PIT support for HPET legacy mode · e9f42757
      Authored by Beth Kon
      When kvm is in hpet_legacy_mode, the hpet is providing the timer
      interrupt and the pit should not be. So in legacy mode, the pit timer
      is destroyed, but the *state* of the pit is maintained. So if kvm or
      the guest tries to modify the state of the pit, this modification is
      accepted, *except* that the timer isn't actually started. When we exit
      hpet_legacy_mode, the current state of the pit (which is up to date
      since we've been accepting modifications) is used to restart the pit
      timer.
      
      The saved_mode code in kvm_pit_load_count temporarily changes mode to
      0xff in order to destroy the timer, but then restores the actual
      value, again maintaining "current" state of the pit for possible later
      reenablement.
      
      [avi: add some reserved storage in the ioctl; make SET_PIT2 IOW]
      [marcelo: fix memory corruption due to reserved storage]
      Signed-off-by: Beth Kon <eak@us.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      e9f42757
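
      A sketch (helper name assumed, x86 host with an in-kernel PIT presumed)
      of how userspace might flip the PIT into HPET legacy mode while
      preserving its state:

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int set_hpet_legacy_mode(int vm_fd, int enable)
        {
            struct kvm_pit_state2 pit;

            if (ioctl(vm_fd, KVM_GET_PIT2, &pit) < 0)
                return -1;
            if (enable)
                pit.flags |= KVM_PIT_FLAGS_HPET_LEGACY;   /* timer stops, state kept */
            else
                pit.flags &= ~KVM_PIT_FLAGS_HPET_LEGACY;  /* timer restarts from state */
            return ioctl(vm_fd, KVM_SET_PIT2, &pit);
        }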
    • KVM: remove old KVMTRACE support code · 2023a29c
      Authored by Marcelo Tosatti
      Return EOPNOTSUPP for KVM_TRACE_ENABLE/PAUSE/DISABLE ioctls.
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      2023a29c
    • KVM: Return to userspace on emulation failure · 3f5d18a9
      Authored by Avi Kivity
      Instead of mindlessly retrying to execute the instruction, report the
      failure to userspace.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      3f5d18a9
    • KVM: Break dependency between vcpu index in vcpus array and vcpu_id. · 73880c80
      Authored by Gleb Natapov
      Archs are free to use vcpu_id as they see fit.  For x86 it is used as the
      vcpu's APIC id.  A new ioctl is added to configure the boot vcpu id, which
      was assumed to be 0 until now.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      73880c80
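
      A sketch using KVM_SET_BOOT_CPU_ID (helper name assumed; note the id is
      passed as the ioctl argument itself, not through a pointer, and must be
      set before any vcpu is created):

        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        int set_boot_cpu(int kvm_fd, int vm_fd, unsigned long bsp_id)
        {
            if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_BOOT_CPU_ID) <= 0)
                return -1;          /* old kernel: vcpu_id 0 is always the BSP */
            return ioctl(vm_fd, KVM_SET_BOOT_CPU_ID, bsp_id);
        }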
    • KVM: Reorder ioctls in kvm.h · 6a4a9839
      Authored by Avi Kivity
      Somehow the VM ioctls got unsorted; resort.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      6a4a9839
    • KVM: Downsize max support MSI-X entry to 256 · e7333391
      Authored by Sheng Yang
      We only trap one page for the MSI-X table now, so at most 4k / (128 bits / 8
      = 16 bytes per entry) = 256 entries fit.
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      e7333391