1. 09 Jan 2014 (1 commit)
  2. 18 Oct 2013 (1 commit)
  3. 17 Oct 2013 (14 commits)
    • kvm: powerpc: book3s: Add a new config variable CONFIG_KVM_BOOK3S_HV_POSSIBLE · 9975f5e3
      Committed by Aneesh Kumar K.V
      This helps us select the relevant code in the kernel when we later
      split the HV and PR bits into separate modules.  The patch also
      makes the config options for PR KVM selectable.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
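
      A minimal sketch of the compile-time gating idiom such a config
      variable enables (the Kconfig symbol is from the commit; the helper
      below is purely illustrative, not code from the patch):

          /* Illustrative only: pick HV-capable paths at build time. */
          #include <stdbool.h>

          static inline bool kvm_hv_possible(void)
          {
          #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
                  return true;    /* HV-capable code was built in */
          #else
                  return false;   /* PR-only build */
          #endif
          }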
    • kvm: powerpc: book3s: pr: Rename KVM_BOOK3S_PR to KVM_BOOK3S_PR_POSSIBLE · 7aa79938
      Committed by Aneesh Kumar K.V
      With later patches supporting PR KVM as a kernel module, the changes
      that have to be built into the main kernel binary to enable the PR
      KVM module are now selected via KVM_BOOK3S_PR_POSSIBLE.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: E500: Add userspace debug stub support · ce11e48b
      Committed by Bharat Bhushan
      This patch adds debug stub support on booke/bookehv.
      The QEMU debug stub can now use hardware breakpoints, watchpoints
      and software breakpoints to debug the guest.

      This is how we save/restore the debug register context when switching
      between guest, userspace and kernel user-process:

      When QEMU is running
       -> thread->debug_reg == QEMU debug register context.
       -> Kernel will handle switching the debug registers on context switch.
       -> no vcpu_load() called

      QEMU makes ioctls (except RUN)
       -> This will call vcpu_load()
       -> should not change context.
       -> Some ioctls can change the vcpu debug registers; that context is
          saved in vcpu->debug_regs

      QEMU makes the RUN ioctl
       -> Save thread->debug_reg on the STACK
       -> Set thread->debug_reg == vcpu->debug_reg
       -> Load thread->debug_reg
       -> RUN VCPU (so the thread points to the vcpu context)

      Context switch happens while the VCPU is running
       -> vcpu_load() should not load any context
       -> the kernel loads the vcpu context, as thread->debug_regs points
          to the vcpu context.

      On heavyweight_exit
       -> Load the context saved on the stack back into thread->debug_reg

      We do not currently support debug resource emulation for the guest;
      on a debug exception we always exit to user space, whether or not
      user space is expecting the debug exception.  If the exception is
      unexpected (a breakpoint/watchpoint event not set by userspace), the
      action is left to user space.  This is similar to the behaviour
      before, except that proper exit state is now available to user space.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
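
      The swap described above can be condensed into a stand-alone sketch
      (struct layouts simplified; the names mirror the commit text but are
      stand-ins, not the kernel definitions):

          /* Model of the RUN-ioctl debug register swap. */
          struct debug_reg  { unsigned long dbcr0, iac1, dac1; };
          struct thread_ctx { struct debug_reg debug; };
          struct vcpu_ctx   { struct debug_reg debug; };

          static void run_vcpu(struct thread_ctx *thr, struct vcpu_ctx *vcpu)
          {
                  struct debug_reg saved = thr->debug;  /* save on stack */

                  thr->debug = vcpu->debug;  /* thread now carries the guest
                                              * context, so context switches
                                              * preserve it automatically   */
                  /* ... enter and run the guest ... */
                  thr->debug = saved;        /* heavyweight_exit: restore   */
          }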
    • KVM: PPC: E500: Using "struct debug_reg" · 547465ef
      Committed by Bharat Bhushan
      For KVM, also use the "struct debug_reg" defined in asm/processor.h.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Better handling of host-side read-only pages · 93b159b4
      Committed by Paul Mackerras
      Currently we request write access to all pages that get mapped into the
      guest, even if the guest is only loading from the page.  This reduces
      the effectiveness of KSM because it means that we unshare every page we
      access.  Also, we always set the changed (C) bit in the guest HPTE if
      it allows writing, even for a guest load.
      
      This fixes both these problems.  We pass an 'iswrite' flag to the
      mmu.xlate() functions and to kvmppc_mmu_map_page() to indicate whether
      the access is a load or a store.  The mmu.xlate() functions now only
      set C for stores.  kvmppc_gfn_to_pfn() now calls gfn_to_pfn_prot()
      instead of gfn_to_pfn() so that it can indicate whether we need write
      access to the page, and get back a 'writable' flag to indicate whether
      the page is writable or not.  If that 'writable' flag is clear, we then
      make the host HPTE read-only even if the guest HPTE allowed writing.
      
      This means that we can get a protection fault when the guest writes to a
      page that it has mapped read-write but which is read-only on the host
      side (perhaps due to KSM having merged the page).  Thus we now call
      kvmppc_handle_pagefault() for protection faults as well as HPTE not found
      faults.  In kvmppc_handle_pagefault(), if the access was allowed by the
      guest HPTE and we thus need to install a new host HPTE, we then need to
      remove the old host HPTE if there is one.  This is done with a new
      function, kvmppc_mmu_unmap_page(), which uses kvmppc_mmu_pte_vflush() to
      find and remove the old host HPTE.
      
      Since the memslot-related functions require the KVM SRCU read lock to
      be held, this adds srcu_read_lock/unlock pairs around the calls to
      kvmppc_handle_pagefault().
      
      Finally, this changes kvmppc_mmu_book3s_32_xlate_pte() to not ignore
      guest HPTEs that don't permit access, and to return -EPERM for accesses
      that are not permitted by the page protections.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
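
      The core of the writable-flag decision, as a self-contained sketch
      (the real code uses gfn_to_pfn_prot() and HPTE protection bits; the
      boolean flags here are invented for illustration):

          #include <stdbool.h>

          static bool host_hpte_writable(bool guest_allows_write,
                                         bool host_page_writable)
          {
                  /* Even when the guest HPTE allows writing, a read-only
                   * host page (e.g. KSM-merged) must be mapped read-only;
                   * a later guest store then takes a protection fault and
                   * is resolved via kvmppc_handle_pagefault(). */
                  return guest_allows_write && host_page_writable;
          }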
    • KVM: PPC: Book3S PR: Allocate kvm_vcpu structs from kvm_vcpu_cache · 3ff95502
      Committed by Paul Mackerras
      This makes PR KVM allocate its kvm_vcpu structs from the kvm_vcpu_cache
      rather than having them embedded in the kvmppc_vcpu_book3s struct,
      which is allocated with vzalloc.  The reason is to reduce the
      differences between PR and HV KVM in order to make it easier to have
      them coexist in one kernel binary.
      
      With this, the kvm_vcpu struct has a pointer to the kvmppc_vcpu_book3s
      struct.  The pointer to the kvmppc_book3s_shadow_vcpu struct has moved
      from the kvmppc_vcpu_book3s struct to the kvm_vcpu struct, and is only
      present for 32-bit, since it is only used for 32-bit.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: squash in compile fix from Aneesh]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe · 9308ab8e
      Committed by Paul Mackerras
      This adds a per-VM mutex to provide mutual exclusion between vcpus
      for accesses to and updates of the guest hashed page table (HPT).
      This also makes the code use single-byte writes to the HPT entry
      when updating of the reference (R) and change (C) bits.  The reason
      for doing this, rather than writing back the whole HPTE, is that on
      non-PAPR virtual machines, the guest OS might be writing to the HPTE
      concurrently, and writing back the whole HPTE might conflict with
      that.  Also, real hardware does single-byte writes to update R and C.
      
      The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
      the HPT and updating R and/or C, and in the PAPR HPT update hcalls
      (H_ENTER, H_REMOVE, etc.).  Having the mutex means that we don't need
      to use a hypervisor lock bit in the HPT update hcalls, and we don't
      need to be careful about the order in which the bytes of the HPTE are
      updated by those hcalls.
      
      The other change here is to make emulated TLB invalidations (tlbie)
      effective across all vcpus.  To do this we call kvmppc_mmu_pte_vflush
      for all vcpus in kvmppc_ppc_book3s_64_tlbie().
      
      For 32-bit, this makes the setting of the accessed and dirty bits use
      single-byte writes, and makes tlbie invalidate shadow HPTEs for all
      vcpus.
      
      With this, PR KVM can successfully run SMP guests.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
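
      Why single-byte stores help, in a hedged sketch (the bit placement
      and byte index below are illustrative, not the architected HPTE
      layout):

          #include <stdint.h>

          #define HPTE_R_BIT 0x01   /* illustrative position of R */
          #define HPTE_C_BIT 0x80   /* illustrative position of C */

          static void set_r_maybe_c(uint64_t *hpte_dword1, int is_store)
          {
                  unsigned char *b = (unsigned char *)hpte_dword1;

                  /* Touch only the byte holding R/C, so a concurrent guest
                   * update to the other bytes of the HPTE is not clobbered
                   * by a full 64-bit write-back. */
                  b[7] |= HPTE_R_BIT | (is_store ? HPTE_C_BIT : 0);
          }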
    • KVM: PPC: Book3S PR: Allow guest to use 64k pages · a4a0f252
      Committed by Paul Mackerras
      This adds the code to interpret 64k HPTEs in the guest hashed page
      table (HPT), 64k SLB entries, and to tell the guest about 64k pages
      in kvm_vm_ioctl_get_smmu_info().  Guest 64k pages are still shadowed
      by 4k pages.
      
      This also adds another hash table to the four we have already in
      book3s_mmu_hpte.c to allow us to find all the PTEs that we have
      instantiated that match a given 64k guest page.
      
      The tlbie instruction changed starting with POWER6 to use a bit in
      the RB operand to indicate large page invalidations, and to use other
      RB bits to indicate the base and actual page sizes and the segment
      size.  64k pages came in slightly earlier, with POWER5++.
      We use one bit in vcpu->arch.hflags to indicate that the emulated
      cpu supports 64k pages, and another to indicate that it has the new
      tlbie definition.
      
      The KVM_PPC_GET_SMMU_INFO ioctl presents a bit of a problem, because
      the MMU capabilities depend on which CPU model we're emulating, but it
      is a VM ioctl not a VCPU ioctl and therefore doesn't get passed a VCPU
      fd.  In addition, commonly-used userspace (QEMU) calls it before
      setting the PVR for any VCPU.  Therefore, as a best effort we look at
      the first vcpu in the VM and return 64k pages or not depending on its
      capabilities.  We also make the PVR default to the host PVR on recent
      CPUs that support 1TB segments (and therefore multiple page sizes as
      well) so that KVM_PPC_GET_SMMU_INFO will include 64k page and 1TB
      segment support on those CPUs.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu · a2d56020
      Committed by Paul Mackerras
      Currently PR-style KVM keeps the volatile guest register values
      (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
      the main kvm_vcpu struct.  For 64-bit, the shadow_vcpu exists in two
      places, a kmalloc'd struct and in the PACA, and it gets copied back
      and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
      can't rely on being able to access the kmalloc'd struct.
      
      This changes the code to copy the volatile values into the shadow_vcpu
      as one of the last things done before entering the guest.  Similarly
      the values are copied back out of the shadow_vcpu to the kvm_vcpu
      immediately after exiting the guest.  We arrange for interrupts to be
      still disabled at this point so that we can't get preempted on 64-bit
      and end up copying values from the wrong PACA.
      
      This means that the accessor functions in kvm_book3s.h for these
      registers are greatly simplified, and are the same between PR and HV
      KVM.  In places where accesses to shadow_vcpu fields are now replaced
      by accesses to the kvm_vcpu, we can also remove the svcpu_get/put
      pairs.  Finally, on 64-bit, we don't need the kmalloc'd struct at all
      any more.

      With this, the time to read the PVR one million times in a loop went
      from 567.7ms to 575.5ms (averages of 6 values), an increase of about
      1.4% for this worst-case test for guest entries and exits.  The
      standard deviation of the measurements is about 11ms, so the
      difference is only marginally significant statistically.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
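
      A self-contained model of the copy-in/copy-out scheme (layouts are
      simplified stand-ins for the real kvm_vcpu/shadow_vcpu structs):

          struct volatile_regs { unsigned long gpr[14], cr, lr, ctr, xer, pc; };
          struct svcpu { struct volatile_regs r; };  /* in the PACA on 64-bit */
          struct vcpu  { struct volatile_regs r; };

          static void guest_round_trip(struct vcpu *v, struct svcpu *s)
          {
                  s->r = v->r;   /* one of the last steps before entry,
                                  * done with interrupts disabled        */
                  /* ... run the guest ... */
                  v->r = s->r;   /* immediately after exit, still with
                                  * interrupts disabled, so we cannot be
                                  * preempted onto another PACA          */
          }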
    • KVM: PPC: Book3S HV: Support POWER6 compatibility mode on POWER7 · 388cc6e1
      Committed by Paul Mackerras
      This enables us to use the Processor Compatibility Register (PCR) on
      POWER7 to put the processor into architecture 2.05 compatibility mode
      when running a guest.  In this mode the new instructions and registers
      that were introduced on POWER7 are disabled in user mode.  This
      includes all the VSX facilities plus several other instructions such
      as ldbrx, stdbrx, popcntw, popcntd, etc.
      
      To select this mode, we have a new register accessible through the
      set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT.  Setting
      this to zero gives the full set of capabilities of the processor.
      Setting it to one of the "logical" PVR values defined in PAPR puts
      the vcpu into the compatibility mode for the corresponding
      architecture level.  The supported values are:
      
      0x0f000002	Architecture 2.05 (POWER6)
      0x0f000003	Architecture 2.06 (POWER7)
      0x0f100003	Architecture 2.06+ (POWER7+)
      
      Since the PCR is per-core, the architecture compatibility level and
      the corresponding PCR value are stored in the struct kvmppc_vcore, and
      are therefore shared between all vcpus in a virtual core.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: squash in fix to add missing break statements and documentation]
      Signed-off-by: Alexander Graf <agraf@suse.de>
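
      A userspace sketch of selecting POWER6 compatibility mode through
      the new register (assumes a PPC host's uapi headers; error handling
      elided):

          #include <stdint.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static int set_compat_power6(int vcpu_fd)
          {
                  uint64_t compat = 0x0f000002;   /* logical PVR, arch 2.05 */
                  struct kvm_one_reg reg = {
                          .id   = KVM_REG_PPC_ARCH_COMPAT,
                          .addr = (uintptr_t)&compat,
                  };
                  return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
          }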
    • KVM: PPC: Book3S HV: Add support for guest Program Priority Register · 4b8473c9
      Committed by Paul Mackerras
      POWER7 and later IBM server processors have a register called the
      Program Priority Register (PPR), which controls the priority of
      each hardware CPU SMT thread, and affects how fast it runs compared
      to other SMT threads.  This priority can be controlled by writing to
      the PPR or by use of a set of instructions of the form or rN,rN,rN
      which are otherwise no-ops but have been defined to set the priority
      to particular levels.
      
      This adds code to context switch the PPR when entering and exiting
      guests and to make the PPR value accessible through the SET/GET_ONE_REG
      interface.  When entering the guest, we set the PPR as late as
      possible, because if we are setting a low thread priority it will
      make the code run slowly from that point on.  Similarly, the
      first-level interrupt handlers save the PPR value in the PACA very
      early on, and set the thread priority to the medium level, so that
      the interrupt handling code runs at a reasonable speed.
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Store LPCR value for each virtual core · a0144e2a
      Committed by Paul Mackerras
      This adds the ability to have a separate LPCR (Logical Partitioning
      Control Register) value relating to a guest for each virtual core,
      rather than only having a single value for the whole VM.  This
      corresponds to what real POWER hardware does, where there is a LPCR
      per CPU thread but most of the fields are required to have the same
      value on all active threads in a core.
      
      The per-virtual-core LPCR can be read and written using the
      GET/SET_ONE_REG interface.  Userspace can only modify the
      following fields of the LPCR value:
      
      DPFD	Default prefetch depth
      ILE	Interrupt little-endian
      TC	Translation control (secondary HPT hash group search disable)
      
      We still maintain a per-VM default LPCR value in kvm->arch.lpcr, which
      contains bits relating to memory management, i.e. the Virtualized
      Partition Memory (VPM) bits and the bits relating to guest real mode.
      When this default value is updated, the update needs to be propagated
      to the per-vcore values, so we add a kvmppc_update_lpcr() helper to do
      that.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: fix whitespace]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Implement timebase offset for guests · 93b0f4dc
      Committed by Paul Mackerras
      This allows guests to have a different timebase origin from the host.
      This is needed for migration, where a guest can migrate from one host
      to another and the two hosts might have a different timebase origin.
      However, the timebase seen by the guest must not go backwards, and
      should go forwards only by a small amount corresponding to the time
      taken for the migration.
      
      Therefore this provides a new per-vcpu value accessed via the one_reg
      interface using the new KVM_REG_PPC_TB_OFFSET identifier.  This value
      defaults to 0 and is not modified by KVM.  On entering the guest, this
      value is added onto the timebase, and on exiting the guest, it is
      subtracted from the timebase.
      
      This is only supported for recent POWER hardware which has the TBU40
      (timebase upper 40 bits) register.  Writing to the TBU40 register only
      alters the upper 40 bits of the timebase, leaving the lower 24 bits
      unchanged.  This provides a way to modify the timebase for guest
      migration without disturbing the synchronization of the timebase
      registers across CPU cores.  The kernel rounds up the value given
      to a multiple of 2^24.
      
      Timebase values stored in KVM structures (struct kvm_vcpu, struct
      kvmppc_vcore, etc.) are stored as host timebase values.  The timebase
      values in the dispatch trace log need to be guest timebase values,
      however, since that is read directly by the guest.  This moves the
      setting of vcpu->arch.dec_expires on guest exit to a point after we
      have restored the host timebase so that vcpu->arch.dec_expires is a
      host timebase value.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
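
      The TBU40-imposed rounding described above is equivalent to the
      following arithmetic (a sketch, not the kernel's implementation):

          #include <stdint.h>

          /* Round a timebase offset up to a multiple of 2^24, since only
           * the upper 40 bits of the 64-bit timebase can be written. */
          static uint64_t tb_offset_round_up(uint64_t off)
          {
                  const uint64_t gran = 1ULL << 24;

                  return (off + gran - 1) & ~(gran - 1);
          }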
    • KVM: PPC: Book3S HV: Save/restore SIAR and SDAR along with other PMU registers · 14941789
      Committed by Paul Mackerras
      Currently we are not saving and restoring the SIAR and SDAR registers in
      the PMU (performance monitor unit) on guest entry and exit.  The result
      is that performance monitoring tools in the guest could get false
      information about where a program was executing and what data it was
      accessing at the time of a performance monitor interrupt.  This fixes
      it by saving and restoring these registers along with the other PMU
      registers on guest entry/exit.
      
      This also provides a way for userspace to access these values for a
      vcpu via the one_reg interface.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  4. 14 Oct 2013 (1 commit)
  5. 08 Jul 2013 (2 commits)
  6. 27 Apr 2013 (8 commits)
    • KVM: PPC: Book3S: Add kernel emulation for the XICS interrupt controller · bc5ad3f3
      Committed by Benjamin Herrenschmidt
      This adds in-kernel emulation of the XICS (eXternal Interrupt
      Controller Specification) interrupt controller specified by PAPR, for
      both HV and PR KVM guests.
      
      The XICS emulation supports up to 1048560 interrupt sources.
      Interrupt source numbers below 16 are reserved; 0 is used to mean no
      interrupt and 2 is used for IPIs.  Internally these are represented in
      blocks of 1024, called ICS (interrupt controller source) entities, but
      that is not visible to userspace.
      
      Each vcpu gets one ICP (interrupt controller presentation) entity,
      used to store the per-vcpu state such as vcpu priority, pending
      interrupt state, IPI request, etc.
      
      This does not include any API or any way to connect vcpus to their
      ICP state; that will be added in later patches.
      
      This is based on an initial implementation by Michael Ellerman
      <michael@ellerman.id.au> reworked by Benjamin Herrenschmidt and
      Paul Mackerras.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: fix typo, add dependency on !KVM_MPIC]
      Signed-off-by: Alexander Graf <agraf@suse.de>
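
      The source numbering implied above works out as 1024 ICS blocks of
      1024 sources each (1048576), minus the 16 reserved low numbers,
      giving 1048560 usable sources.  A hedged sketch of the split (the
      shift follows from the block size; the helper is illustrative):

          #define ICS_SHIFT 10    /* 1024 sources per ICS block */

          static void irq_to_ics(unsigned int irq,
                                 unsigned int *ics, unsigned int *src)
          {
                  *ics = irq >> ICS_SHIFT;                /* which block */
                  *src = irq & ((1u << ICS_SHIFT) - 1);   /* index in it */
          }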
    • KVM: PPC: Book3S: Add infrastructure to implement kernel-side RTAS calls · 8e591cb7
      Committed by Michael Ellerman
      For pseries machine emulation, in order to move the interrupt
      controller code to the kernel, we need to intercept some RTAS
      calls in the kernel itself.  This adds an infrastructure to allow
      in-kernel handlers to be registered for RTAS services by name.
      A new ioctl, KVM_PPC_RTAS_DEFINE_TOKEN, then allows userspace to
      associate token values with those service names.  Then, when the
      guest requests an RTAS service with one of those token values, it
      will be handled by the relevant in-kernel handler rather than being
      passed up to userspace as at present.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: fix warning]
      Signed-off-by: Alexander Graf <agraf@suse.de>
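
      A userspace sketch of the new ioctl (struct kvm_rtas_token_args and
      KVM_PPC_RTAS_DEFINE_TOKEN come from the uapi headers; error
      handling elided):

          #include <stdint.h>
          #include <string.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static int define_rtas_token(int vm_fd, const char *name,
                                       uint64_t token)
          {
                  struct kvm_rtas_token_args args = { .token = token };

                  strncpy(args.name, name, sizeof(args.name) - 1);
                  return ioctl(vm_fd, KVM_PPC_RTAS_DEFINE_TOKEN, &args);
          }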
    • KVM: PPC: Support irq routing and irqfd for in-kernel MPIC · de9ba2f3
      Committed by Alexander Graf
      Now that all the irq routing and irqfd pieces are generic, we can expose
      real irqchip support to all of KVM's internal helpers.
      
      This allows us to use irqfd with the in-kernel MPIC.
      Signed-off-by: Alexander Graf <agraf@suse.de>
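
      A minimal userspace sketch of wiring an eventfd to a guest irq line
      via KVM_IRQFD (error handling elided):

          #include <sys/eventfd.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static int wire_irqfd(int vm_fd, unsigned int gsi)
          {
                  struct kvm_irqfd req = {
                          .fd  = eventfd(0, 0),   /* signalled to inject */
                          .gsi = gsi,             /* guest irq line      */
                  };
                  return ioctl(vm_fd, KVM_IRQFD, &req);
          }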
    • kvm/ppc/mpic: add KVM_CAP_IRQ_MPIC · eb1e4f43
      Committed by Scott Wood
      Enabling this capability connects the vcpu to the designated in-kernel
      MPIC.  Using explicit connections between vcpus and irqchips allows
      for flexibility, but the main benefit at the moment is that it
      simplifies the code -- KVM doesn't need vm-global state to remember
      which MPIC object is associated with this vm, and it doesn't need to
      care about ordering between irqchip creation and vcpu creation.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      [agraf: add stub functions for kvmppc_mpic_{dis,}connect_vcpu]
      Signed-off-by: Alexander Graf <agraf@suse.de>
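
      A userspace sketch of the connection (per the KVM API documentation
      for KVM_CAP_IRQ_MPIC: args[0] is the MPIC device fd, args[1] the
      MPIC cpu number; error handling elided):

          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static int connect_vcpu_to_mpic(int vcpu_fd, int mpic_fd, int cpu)
          {
                  struct kvm_enable_cap cap = {
                          .cap  = KVM_CAP_IRQ_MPIC,
                          .args = { mpic_fd, cpu },
                  };
                  return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
          }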
    • kvm/ppc/mpic: in-kernel MPIC emulation · 5df554ad
      Committed by Scott Wood
      Hook the MPIC code up to the KVM interfaces, add locking, etc.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      [agraf: add stub function for kvmppc_mpic_set_epr, non-booke, 64bit]
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Report VPA and DTL modifications in dirty map · c35635ef
      Committed by Paul Mackerras
      At present, the KVM_GET_DIRTY_LOG ioctl doesn't report modifications
      done by the host to the virtual processor areas (VPAs) and dispatch
      trace logs (DTLs) registered by the guest.  This is because those
      modifications are done either in real mode or in the host kernel
      context, and in neither case does the access go through the guest's
      HPT, and thus no change (C) bit gets set in the guest's HPT.
      
      However, the changes done by the host do need to be tracked so that
      the modified pages get transferred when doing live migration.  In
      order to track these modifications, this adds a dirty flag to the
      struct representing the VPA/DTL areas, and arranges to set the flag
      when the VPA/DTL gets modified by the host.  Then, when we are
      collecting the dirty log, we also check the dirty flags for the
      VPA and DTL for each vcpu and set the relevant bit in the dirty log
      if necessary.  Doing this also means we now need to keep track of
      the guest physical address of the VPA/DTL areas.
      
      So as not to lose track of modifications to a VPA/DTL area when it gets
      unregistered, or when a new area gets registered in its place, we need
      to transfer the dirty state to the rmap chain.  This adds code to
      kvmppc_unpin_guest_page() to do that if the area was dirty.  To simplify
      that code, we now require that all VPA, DTL and SLB shadow buffer areas
      fit within a single host page.  Guests already comply with this
      requirement because pHyp requires that these areas not cross a 4k
      boundary.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: e500: Add support for EPTCFG register · 9a6061d7
      Committed by Mihai Caraman
      The EPTCFG register defined by E.PT is accessed unconditionally by
      Linux guests in the presence of MAV 2.0.  Emulate it now.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: e500: Add support for TLBnPS registers · 307d9008
      Committed by Mihai Caraman
      Add support for the TLBnPS registers available in MMU Architecture
      Version (MAV) 2.0.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  7. 22 Mar 2013 (1 commit)
  8. 15 Feb 2013 (1 commit)
  9. 10 Jan 2013 (1 commit)
    • KVM: PPC: BookE: Implement EPR exit · 1c810636
      Committed by Alexander Graf
      The External Proxy Facility in FSL BookE chips allows the interrupt
      controller to automatically acknowledge an interrupt as soon as a
      core gets its pending external interrupt delivered.
      
      Today, user space implements the interrupt controller, so we need to
      check on it during such a cycle.
      
      This patch implements the logic for user space to enable and disable
      EPR exiting, plus EPR exiting itself, so that user space can
      acknowledge an interrupt when an external interrupt has successfully
      been delivered into the guest vcpu.
      Signed-off-by: Alexander Graf <agraf@suse.de>
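
      A userspace sketch of the resulting run-loop handling (run->epr.epr
      is the uapi field for this exit; the interrupt-controller lookup is
      a stand-in):

          #include <linux/kvm.h>

          /* Stand-in for the userspace irqchip's acknowledge path. */
          extern unsigned int my_pic_ack_interrupt(void);

          static void handle_epr_exit(struct kvm_run *run)
          {
                  if (run->exit_reason == KVM_EXIT_EPR)
                          run->epr.epr = my_pic_ack_interrupt();
          }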
  10. 14 Dec 2012 (2 commits)
  11. 06 Dec 2012 (3 commits)
    • KVM: PPC: Make EPCR a valid field for booke64 and bookehv · 62b4db00
      Committed by Alexander Graf
      In BookE, EPCR is defined and valid when either the HV or the 64bit
      category is implemented.  Reflect this in the field definition.

      Today the only KVM target on 64bit is HV enabled, so there is no
      change in actual source code, but this keeps the code closer to the
      spec and doesn't build up artificial road blocks for a PR KVM
      on 64bit.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Improve handling of local vs. global TLB invalidations · 1b400ba0
      Committed by Paul Mackerras
      When we change or remove a HPT (hashed page table) entry, we can do
      either a global TLB invalidation (tlbie) that works across the whole
      machine, or a local invalidation (tlbiel) that only affects this core.
      Currently we do local invalidations if the VM has only one vcpu or if
      the guest requests it with the H_LOCAL flag, though the guest Linux
      kernel currently doesn't ever use H_LOCAL.  Then, to cope with the
      possibility that vcpus moving around to different physical cores might
      expose stale TLB entries, there is some code in kvmppc_hv_entry to
      flush the whole TLB of entries for this VM if either this vcpu is now
      running on a different physical core from where it last ran, or if this
      physical core last ran a different vcpu.
      
      There are a number of problems on POWER7 with this as it stands:
      
      - The TLB invalidation is done per thread, whereas it only needs to be
        done per core, since the TLB is shared between the threads.
      - With the possibility of the host paging out guest pages, the use of
        H_LOCAL by an SMP guest is dangerous since the guest could possibly
        retain and use a stale TLB entry pointing to a page that had been
        removed from the guest.
      - The TLB invalidations that we do when a vcpu moves from one physical
        core to another are unnecessary in the case of an SMP guest that isn't
        using H_LOCAL.
      - The optimization of using local invalidations rather than global should
        apply to guests with one virtual core, not just one vcpu.
      
      (None of this applies on PPC970, since there we always have to
      invalidate the whole TLB when entering and leaving the guest, and we
      can't support paging out guest memory.)
      
      To fix these problems and simplify the code, we now maintain a simple
      cpumask of which cpus need to flush the TLB on entry to the guest.
      (This is indexed by cpu, though we only ever use the bits for thread
      0 of each core.)  Whenever we do a local TLB invalidation, we set the
      bits for every cpu except the bit for thread 0 of the core that we're
      currently running on.  Whenever we enter a guest, we test and clear the
      bit for our core, and flush the TLB if it was set.
      
      On initial startup of the VM, and when resetting the HPT, we set all the
      bits in the need_tlb_flush cpumask, since any core could potentially have
      stale TLB entries from the previous VM to use the same LPID, or the
      previous contents of the HPT.
      
      Then, we maintain a count of the number of online virtual cores, and use
      that when deciding whether to use a local invalidation rather than the
      number of online vcpus.  The code to make that decision is extracted out
      into a new function, global_invalidates().  For multi-core guests on
      POWER7 (i.e. when we are using mmu notifiers), we now never do local
      invalidations regardless of the H_LOCAL flag.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
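
      A self-contained model of the need_tlb_flush bookkeeping (a plain
      64-bit mask stands in for the kernel cpumask; the thread geometry
      is assumed, not taken from the patch):

          #include <stdint.h>

          #define THREADS_PER_CORE 4

          static uint64_t need_tlb_flush;   /* bit per thread-0 cpu */

          static void after_local_tlbiel(int my_cpu)
          {
                  int my_core0 = my_cpu & ~(THREADS_PER_CORE - 1);

                  /* Every core except ours may now hold stale entries. */
                  need_tlb_flush |= ~(1ULL << my_core0);
          }

          static int flush_needed_on_entry(int my_cpu)
          {
                  int my_core0 = my_cpu & ~(THREADS_PER_CORE - 1);
                  uint64_t bit = 1ULL << my_core0;
                  int flush = (need_tlb_flush & bit) != 0;

                  need_tlb_flush &= ~bit;   /* test and clear our bit */
                  return flush;
          }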
    • KVM: PPC: Book3S HV: Add a mechanism for recording modified HPTEs · 44e5f6be
      Committed by Paul Mackerras
      This uses a bit in our record of the guest view of the HPTE to record
      when the HPTE gets modified.  We use a reserved bit for this, and ensure
      that this bit is always cleared in HPTE values returned to the guest.
      
      The recording of modified HPTEs is only done if other code indicates
      its interest by setting kvm->arch.hpte_mod_interest to a non-zero value.
      The reason for this is that when later commits add facilities for
      userspace to read the HPT, the first pass of reading the HPT will be
      quicker if there are no (or very few) HPTEs marked as modified,
      rather than having most HPTEs marked as modified.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
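
      A hedged sketch of the reserved-bit trick (the bit position is
      illustrative; the names loosely follow the commit text):

          #include <stdint.h>

          #define HPTE_GR_MODIFIED (1ULL << 62)   /* illustrative bit */

          static void note_hpte_modified(uint64_t *guest_hpte, int interest)
          {
                  if (interest)           /* kvm->arch.hpte_mod_interest */
                          *guest_hpte |= HPTE_GR_MODIFIED;
          }

          static uint64_t hpte_value_for_guest(uint64_t guest_hpte)
          {
                  /* The borrowed bit must never be visible to the guest. */
                  return guest_hpte & ~HPTE_GR_MODIFIED;
          }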
  12. 30 Oct 2012 (3 commits)
    • KVM: PPC: Book3S HV: Fix accounting of stolen time · c7b67670
      Committed by Paul Mackerras
      Currently the code that accounts stolen time tends to overestimate the
      stolen time, and will sometimes report more stolen time in a DTL
      (dispatch trace log) entry than has elapsed since the last DTL entry.
      This can cause guests to underflow the user or system time measured
      for some tasks, leading to ridiculous CPU percentages and total runtimes
      being reported by top and other utilities.
      
      In addition, the current code was designed for the previous policy where
      a vcore would only run when all the vcpus in it were runnable, and so
      only counted stolen time on a per-vcore basis.  Now that a vcore can
      run while some of the vcpus in it are doing other things in the kernel
      (e.g. handling a page fault), we need to count the time when a vcpu task
      is preempted while it is not running as part of a vcore as stolen also.
      
      To do this, we bring back the BUSY_IN_HOST vcpu state and extend the
      vcpu_load/put functions to count preemption time while the vcpu is
      in that state.  Handling the transitions between the RUNNING and
      BUSY_IN_HOST states requires checking and updating two variables
      (accumulated time stolen and time last preempted), so we add a new
      spinlock, vcpu->arch.tbacct_lock.  This protects both the per-vcpu
      stolen/preempt-time variables, and the per-vcore variables while this
      vcpu is running the vcore.
      
      Finally, we now don't count time spent in userspace as stolen time.
      The task could be executing in userspace on behalf of the vcpu, or
      it could be preempted, or the vcpu could be genuinely stopped.  Since
      we have no way of dividing up the time between these cases, we don't
      count any of it as stolen.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Run virtual core whenever any vcpus in it can run · 8455d79e
      Committed by Paul Mackerras
      Currently the Book3S HV code implements a policy on multi-threaded
      processors (i.e. POWER7) that requires all of the active vcpus in a
      virtual core to be ready to run before we run the virtual core.
      However, that causes problems on reset, because reset stops all vcpus
      except vcpu 0, and can also reduce throughput since all four threads
      in a virtual core have to wait whenever any one of them hits a
      hypervisor page fault.
      
      This relaxes the policy, allowing the virtual core to run as soon as
      any vcpu in it is runnable.  With this, the KVMPPC_VCPU_STOPPED state
      and the KVMPPC_VCPU_BUSY_IN_HOST state have been combined into a single
      KVMPPC_VCPU_NOTREADY state, since we no longer need to distinguish
      between them.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Fixes for late-joining threads · 2f12f034
      Committed by Paul Mackerras
      If a thread in a virtual core becomes runnable while other threads
      in the same virtual core are already running in the guest, it is
      possible for the latecomer to join the others on the core without
      first pulling them all out of the guest.  Currently this only happens
      rarely, when a vcpu is first started.  This fixes some bugs and
      omissions in the code in this case.
      
      First, we need to check for VPA updates for the latecomer and make
      a DTL entry for it.  Secondly, if it comes along while the master
      vcpu is doing a VPA update, we don't need to do anything since the
      master will pick it up in kvmppc_run_core.  To handle this correctly
      we introduce a new vcore state, VCORE_STARTING.  Thirdly, there is
      a race because we currently clear the hardware thread's hwthread_req
      before waiting to see it get to nap.  A latecomer thread could have
      its hwthread_req cleared before it gets to test it, and therefore
      never increment the nap_count, leading to messages about wait_for_nap
      timeouts.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  13. 06 Oct 2012 (2 commits)
    • KVM: PPC: Move kvm->arch.slot_phys into memslot.arch · a66b48c3
      Committed by Paul Mackerras
      Now that we have an architecture-specific field in the kvm_memory_slot
      structure, we can use it to store the array of page physical addresses
      that we need for Book3S HV KVM on PPC970 processors.  This reduces the
      size of struct kvm_arch for Book3S HV, and also reduces the size of
      struct kvm_arch_memory_slot for other PPC KVM variants since the fields
      in it are now only compiled in for Book3S HV.
      
      This necessitates making the kvm_arch_create_memslot and
      kvm_arch_free_memslot operations specific to each PPC KVM variant.
      That in turn means that we now don't allocate the rmap arrays on
      Book3S PR and Book E.
      
      Since we now unpin pages and free the slot_phys array in
      kvmppc_core_free_memslot, we no longer need to do it in
      kvmppc_core_destroy_vm, since the generic code takes care to free
      all the memslots when destroying a VM.
      
      We now need the new memslot to be passed in to
      kvmppc_core_prepare_memory_region, since we need to initialize its
      arch.slot_phys member on Book3S HV.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • booke: Added ONE_REG interface for IAC/DAC debug registers · 6df8d3fc
      Committed by Bharat Bhushan
      IAC/DAC were defined as 32-bit registers even though they are 64 bits
      wide, so a ONE_REG interface is added to set/get them.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
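
      A userspace sketch of reading one of these registers as a full
      64-bit value (KVM_REG_PPC_IAC1 is from the uapi headers; error
      handling elided):

          #include <stdint.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static uint64_t get_iac1(int vcpu_fd)
          {
                  uint64_t val = 0;
                  struct kvm_one_reg reg = {
                          .id   = KVM_REG_PPC_IAC1,
                          .addr = (uintptr_t)&val,
                  };

                  ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
                  return val;
          }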