1. 01 April 2015, 3 commits
  2. 17 March 2015, 3 commits
  3. 06 March 2015, 1 commit
    • KVM: s390: Allocate and save/restore vector registers · 68c55750
      By Eric Farman
      Define and allocate space for both the host and guest views of
      the vector registers for a given vcpu.  The 32 vector registers
      occupy 128 bits each (512 bytes total), but architecturally are
      paired with 512 additional bytes of reserved space for future
      expansion.
      
      The kvm_sync_regs structs containing the registers are union'ed
      with 1024 bytes of padding in the common kvm_run struct.  The
      addition of 1024 bytes of new register information clearly exceeds
      the existing union, so an expansion of that padding is required.
      
      When changing environments, we need to appropriately save and
      restore the vector registers viewed by both the host and guest,
      into and out of the sync_regs space.
      
      The floating point registers overlay the upper half of vector
      registers 0-15, so there's a bit of data duplication here that
      needs to be carefully avoided.
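      
      A simplified sketch of the layout being described (illustrative
      only; the real definitions live in the s390 and common uapi
      headers, and the exact padding size should be taken from there):
      
      #include <stdint.h>
      
      /* 32 vector registers of 128 bits each = 512 bytes, paired with
       * another 512 reserved bytes for future expansion. */
      struct sync_regs_sketch {
              uint64_t vrs[32][2];     /* 32 x 128-bit vector registers */
              uint8_t  reserved[512];  /* architecturally reserved space */
      };
      
      struct kvm_run_sketch {
              /* ... other kvm_run fields ... */
              union {
                      struct sync_regs_sketch regs;
                      char padding[2048]; /* grown beyond the old 1024 bytes */
              } s;
      };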
      Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
      Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  4. 13 February 2015, 1 commit
    • virtual: Documentation: simplify and generalize paravirt_ops.txt · a2e19991
      By Luis R. Rodriguez
      The general documentation we have for pv_ops is currently in the
      IA64 docs, but since that documentation covers IA64 Xen enablement,
      and IA64 Xen support was ripped out a while ago in commit d52eefb4
      (present since v3.14-rc1), let's just simplify, generalize and move
      the pv_ops documentation to a shared place.
      
      Cc: Isaku Yamahata <yamahata@valinux.co.jp>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: virtualization@lists.linux-foundation.org
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  5. 09 February 2015, 1 commit
  6. 23 January 2015, 3 commits
  7. 21 January 2015, 2 commits
  8. 11 January 2015, 1 commit
    • KVM: arm/arm64: vgic: add init entry to VGIC KVM device · 065c0034
      By Eric Auger
      Since the advent of VGIC dynamic initialization, the VGIC is
      initialized quite late, on the first vcpu run or "on demand" when
      injecting an IRQ or when the guest sets its registers.
      
      This initialization could be initiated explicitly much earlier
      by userspace, as soon as it has provided the requested
      dimensioning parameters.
      
      This patch adds a new entry to the VGIC KVM device that allows
      the user to manually request the VGIC init:
      - a new KVM_DEV_ARM_VGIC_GRP_CTRL group is introduced.
      - Its first attribute is KVM_DEV_ARM_VGIC_CTRL_INIT
      
      The rationale behind introducing a group is to be able to add other
      controls later on, if needed.
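      
      A minimal userspace sketch of triggering the init through this
      attribute (assuming vgic_fd was obtained earlier via
      KVM_CREATE_DEVICE, and that linux/kvm.h pulls in the arch
      asm/kvm.h defining the VGIC constants):
      
      #include <linux/kvm.h>
      #include <sys/ioctl.h>
      
      static int request_vgic_init(int vgic_fd)
      {
              struct kvm_device_attr attr = {
                      .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                      .attr  = KVM_DEV_ARM_VGIC_CTRL_INIT,
                      .addr  = 0,  /* no payload; the attribute is the command */
              };
      
              return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
      }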
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  9. 13 December 2014, 3 commits
  10. 20 November 2014, 1 commit
  11. 07 November 2014, 1 commit
  12. 03 November 2014, 2 commits
  13. 22 September 2014, 1 commit
  14. 19 September 2014, 1 commit
  15. 10 September 2014, 2 commits
  16. 03 September 2014, 1 commit
    • kvm: fix potentially corrupt mmio cache · ee3d1570
      By David Matlack
      vcpu exits and memslot mutations can run concurrently as long as the
      vcpu does not acquire the slots mutex. Thus it is theoretically possible
      for memslots to change underneath a vcpu that is handling an exit.
      
      If we increment the memslot generation number again after
      synchronize_srcu_expedited(), vcpus can safely cache memslot generation
      without maintaining a single rcu_dereference through an entire vm exit.
      And much of the x86/kvm code does not maintain a single rcu_dereference
      of the current memslots during each exit.
      
      We can prevent the following case:
      
         vcpu (CPU 0)                             | thread (CPU 1)
      --------------------------------------------+--------------------------
      1  vm exit                                  |
      2  srcu_read_unlock(&kvm->srcu)             |
      3  decide to cache something based on       |
           old memslots                           |
      4                                           | change memslots
                                                  | (increments generation)
      5                                           | synchronize_srcu(&kvm->srcu);
      6  retrieve generation # from new memslots  |
      7  tag cache with new memslot generation    |
      8  srcu_read_unlock(&kvm->srcu)             |
      ...                                         |
         <action based on cache occurs even       |
          though the caching decision was based   |
          on the old memslots>                    |
      ...                                         |
         <action *continues* to occur until next  |
          memslot generation change, which may    |
          be never>                               |
                                                  |
      
      By incrementing the generation after synchronizing with kvm->srcu readers,
      we ensure that the generation retrieved in (6) will become invalid soon
      after (8).
      
      Keeping the existing increment is not strictly necessary, but we
      do keep it and just move it for consistency from update_memslots to
      install_new_memslots.  It invalidates old cached MMIOs immediately,
      instead of having to wait for the end of synchronize_srcu_expedited,
      which makes the code more clearly correct in case CPU 1 is preempted
      right after synchronize_srcu() returns.
      
      To avoid halving the generation space in SPTEs, always presume that the
      low bit of the generation is zero when reconstructing a generation number
      out of an SPTE.  This effectively disables MMIO caching in SPTEs during
      the call to synchronize_srcu_expedited.  Using the low bit this way is
      somewhat like a seqcount---where the protected thing is a cache, and
      instead of retrying we can simply punt if we observe the low bit to be 1.
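      
      A rough illustration of that convention (names and masks are made
      up for the sketch; the kernel's actual helpers differ):
      
      #include <stdbool.h>
      
      /* Low bit set = memslot update in progress, i.e. we are between
       * the increment before installing new memslots and the increment
       * after synchronize_srcu_expedited(). */
      #define GEN_UPDATE_IN_PROGRESS 0x1u
      
      /* The generation stored in an MMIO SPTE presumes the low bit is zero. */
      static unsigned int gen_for_spte(unsigned int memslots_gen)
      {
              return memslots_gen & ~GEN_UPDATE_IN_PROGRESS;
      }
      
      /* While an update is in flight the current generation has the low
       * bit set, so no stored generation can match it and MMIO caching
       * is effectively disabled for that window. */
      static bool mmio_spte_gen_valid(unsigned int spte_gen, unsigned int cur_gen)
      {
              return spte_gen == cur_gen;
      }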
      
      Cc: stable@vger.kernel.org
      Signed-off-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  17. 25 August 2014, 1 commit
  18. 29 July 2014, 1 commit
  19. 28 July 2014, 4 commits
    • KVM: Allow KVM_CHECK_EXTENSION on the vm fd · 92b591a4
      By Alexander Graf
      The KVM_CHECK_EXTENSION ioctl is only available on the kvm fd today.
      Unfortunately, on PPC some of the capabilities change depending on the
      way a VM was created.
      
      So instead we need a way to expose capabilities as a VM ioctl, so that
      we can see which VM type we're using (HV or PR). To enable this, add
      the KVM_CHECK_EXTENSION ioctl to our vm ioctl portfolio.
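      
      A hedged sketch of what this enables from userspace (vm_fd is the
      descriptor returned by KVM_CREATE_VM):
      
      #include <linux/kvm.h>
      #include <sys/ioctl.h>
      
      /* Returns 0 if the capability is unsupported for this VM, and a
       * positive value if it is supported. */
      static int vm_has_cap(int vm_fd, long capability)
      {
              return ioctl(vm_fd, KVM_CHECK_EXTENSION, capability);
      }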
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: PPC: Book3S: Fix LPCR one_reg interface · a0840240
      By Alexey Kardashevskiy
      Unfortunately, the LPCR was defined as a 32-bit register in the
      one_reg interface.  This is a problem because KVM allows userspace
      to control the DPFD (default prefetch depth) field, which is in the
      upper 32 bits.  The result is that DPFD always gets set to 0, which
      reduces performance in the guest.
      
      We can't just change KVM_REG_PPC_LPCR to be a 64-bit register ID,
      since that would break existing userspace binaries.  Instead we define
      a new KVM_REG_PPC_LPCR_64 id which is 64-bit.  Userspace can still use
      the old KVM_REG_PPC_LPCR id, but it now only modifies those fields in
      the bottom 32 bits that userspace can modify (ILE, TC and AIL).
      If userspace uses the new KVM_REG_PPC_LPCR_64 id, it can modify DPFD
      as well.
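      
      A hedged userspace sketch of setting the full 64-bit LPCR through
      the new ID (vcpu_fd comes from KVM_CREATE_VCPU; KVM_REG_PPC_LPCR_64
      is defined in the powerpc uapi asm/kvm.h pulled in by linux/kvm.h):
      
      #include <linux/kvm.h>
      #include <sys/ioctl.h>
      #include <stdint.h>
      
      static int set_lpcr64(int vcpu_fd, uint64_t lpcr_value)
      {
              struct kvm_one_reg reg = {
                      .id   = KVM_REG_PPC_LPCR_64,
                      .addr = (uintptr_t)&lpcr_value,  /* pointer to the 64-bit value */
              };
      
              return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
      }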
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: Allow only implemented hcalls to be enabled or disabled · ae2113a4
      By Paul Mackerras
      This adds code to check that, when the KVM_CAP_PPC_ENABLE_HCALL
      capability is used to enable or disable in-kernel handling of an
      hcall, the hcall is actually implemented by the kernel.  If it is
      not, an EINVAL error is returned.
      
      This also checks the default-enabled list of hcalls and prints a
      warning if any hcall there is not actually implemented.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: Controls for in-kernel sPAPR hypercall handling · 699a0ea0
      By Paul Mackerras
      This provides a way for userspace to control which sPAPR hcalls get
      handled in the kernel.  Each hcall can be individually enabled or
      disabled for in-kernel handling, except for H_RTAS.  The exception
      for H_RTAS is because userspace can already control whether
      individual RTAS functions are handled in-kernel or not via the
      KVM_PPC_RTAS_DEFINE_TOKEN ioctl, and because the numeric value for
      H_RTAS is out of the normal sequence of hcall numbers.
      
      Hcalls are enabled or disabled using the KVM_ENABLE_CAP ioctl for the
      KVM_CAP_PPC_ENABLE_HCALL capability on the file descriptor for the VM.
      The args field of the struct kvm_enable_cap specifies the hcall number
      in args[0] and the enable/disable flag in args[1]; 0 means disable
      in-kernel handling (so that the hcall will always cause an exit to
      userspace) and 1 means enable.  Enabling or disabling in-kernel
      handling of an hcall is effective across the whole VM.
      
      The ability for KVM_ENABLE_CAP to be used on a VM file descriptor
      on PowerPC is new, added by this commit.  The KVM_CAP_ENABLE_CAP_VM
      capability advertises that this ability exists.
      
      When a VM is created, an initial set of hcalls are enabled for
      in-kernel handling.  The set that is enabled is the set that have
      an in-kernel implementation at this point.  Any new hcall
      implementations from this point onwards should not be added to the
      default set without a good reason.
      
      No distinction is made between real-mode and virtual-mode hcall
      implementations; the one setting controls them both.
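      
      A hedged sketch of the userspace side described above (vm_fd comes
      from KVM_CREATE_VM; the hcall number would be one of the H_xxx
      constants from the sPAPR hcall list):
      
      #include <linux/kvm.h>
      #include <sys/ioctl.h>
      #include <stdint.h>
      
      /* enable = 1 keeps in-kernel handling of the hcall; enable = 0
       * forces every instance of that hcall to exit to userspace. */
      static int set_hcall_handling(int vm_fd, uint64_t hcall_nr, int enable)
      {
              struct kvm_enable_cap cap = {
                      .cap     = KVM_CAP_PPC_ENABLE_HCALL,
                      .args[0] = hcall_nr,
                      .args[1] = enable ? 1 : 0,
              };
      
              return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
      }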
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  20. 21 July 2014, 2 commits
  21. 10 July 2014, 5 commits