1. 17 Oct 2013 (4 commits)
    • KVM: PPC: Book3S PR: Make HPT accesses and updates SMP-safe · 9308ab8e
      Paul Mackerras committed
      This adds a per-VM mutex to provide mutual exclusion between vcpus
      for accesses to and updates of the guest hashed page table (HPT).
      This also makes the code use single-byte writes to the HPT entry
      when updating the reference (R) and change (C) bits.  The reason
      for doing this, rather than writing back the whole HPTE, is that on
      non-PAPR virtual machines, the guest OS might be writing to the HPTE
      concurrently, and writing back the whole HPTE might conflict with
      that.  Also, real hardware does single-byte writes to update R and C.
      
      The new mutex is taken in kvmppc_mmu_book3s_64_xlate() when reading
      the HPT and updating R and/or C, and in the PAPR HPT update hcalls
      (H_ENTER, H_REMOVE, etc.).  Having the mutex means that we don't need
      to use a hypervisor lock bit in the HPT update hcalls, and we don't
      need to be careful about the order in which the bytes of the HPTE are
      updated by those hcalls.
      
      The other change here is to make emulated TLB invalidations (tlbie)
      effective across all vcpus.  To do this we call kvmppc_mmu_pte_vflush
      for all vcpus in kvmppc_mmu_book3s_64_tlbie().
      
      For 32-bit, this makes the setting of the accessed and dirty bits use
      single-byte writes, and makes tlbie invalidate shadow HPTEs for all
      vcpus.
      
      With this, PR KVM can successfully run SMP guests.
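
      As an illustration, here is a minimal sketch of the single-byte
      R/C update (the byte offsets and names are assumptions for
      illustration, not the kernel's actual code; the layout assumes a
      big-endian doubleword as on Book3S):

        #include <stdint.h>

        /* Second doubleword of an HPTE: R is bit 0x100 (the low bit of
         * byte 6), C is bit 0x80 (the high bit of byte 7). */
        static void hpte_set_rc(uint64_t *hpte_r, int set_c)
        {
                uint8_t *b = (uint8_t *)hpte_r;   /* b[0] is the MSB */

                b[6] |= 0x01;           /* set R without touching the rest */
                if (set_c)
                        b[7] |= 0x80;   /* set C the same way */
        }

      Writing only these bytes means a concurrent guest store to other
      fields of the HPTE cannot be lost, which is why this is preferred
      over writing back the whole doubleword.
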
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Handle PP0 page-protection bit in guest HPTEs · 03a9c903
      Paul Mackerras committed
      64-bit POWER processors have a three-bit field for page protection in
      the hashed page table entry (HPTE).  Currently we only interpret the two
      bits that were present in older versions of the architecture.  The only
      defined combination that has the new bit set is 110, meaning read-only
      for supervisor and no access for user mode.
      
      This adds code to kvmppc_mmu_book3s_64_xlate() to interpret the extra
      bit appropriately.
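
      A hedged sketch of the interpretation (the helper name is an
      assumption; the legacy two-bit handling is elided):

        #include <stdbool.h>
        #include <stdint.h>

        /* pp is the 3-bit page-protection field from the HPTE. */
        static void pp_to_access(uint8_t pp, bool user_mode,
                                 bool *may_read, bool *may_write)
        {
                if ((pp & 7) == 6) {
                        /* 110: read-only for supervisor, no user access */
                        *may_read = !user_mode;
                        *may_write = false;
                } else {
                        /* legacy two-bit encodings, handled as before
                         * (elided in this sketch) */
                        *may_read = *may_write = false;
                }
        }
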
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Use 64k host pages where possible · c9029c34
      Paul Mackerras committed
      Currently, PR KVM uses 4k pages for the host-side mappings of guest
      memory, regardless of the host page size.  When the host page size is
      64kB, we might as well use 64k host page mappings for guest mappings
      of 64kB and larger pages and for guest real-mode mappings.  However,
      the magic page has to remain a 4k page.
      
      To implement this, we first add another flag bit to the guest VSID
      values we use, to indicate that this segment is one where host pages
      should be mapped using 64k pages.  For segments with this bit set
      we set the bits in the shadow SLB entry to indicate a 64k base page
      size.  When faulting in host HPTEs for this segment, we make them
      64k HPTEs instead of 4k.  We record the pagesize in struct hpte_cache
      for use when invalidating the HPTE.
      
      For now we restrict the segment containing the magic page (if any) to
      4k pages.  It should be possible to lift this restriction in future
      by ensuring that the magic 4k page is appropriately positioned within
      a host 64k page.
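
      A minimal sketch of the flag-bit idea (VSID_64K's position and the
      helper name are assumptions, not the kernel's actual values):

        #include <stdint.h>

        #define VSID_64K 0x0800000000000000ULL   /* illustrative flag bit */

        /* Pick the host base page shift when faulting in an HPTE for
         * this segment. */
        static unsigned int host_page_shift(uint64_t gvsid)
        {
                return (gvsid & VSID_64K) ? 16 : 12;   /* 64kB vs 4kB */
        }
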
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Allow guest to use 64k pages · a4a0f252
      Paul Mackerras committed
      This adds the code to interpret 64k HPTEs in the guest hashed page
      table (HPT), 64k SLB entries, and to tell the guest about 64k pages
      in kvm_vm_ioctl_get_smmu_info().  Guest 64k pages are still shadowed
      by 4k pages.
      
      This also adds another hash table to the four we have already in
      book3s_mmu_hpte.c to allow us to find all the PTEs that we have
      instantiated that match a given 64k guest page.
      
      The tlbie instruction changed starting with POWER6 to use a bit in
      the RB operand to indicate large page invalidations, and to use other
      RB bits to indicate the base and actual page sizes and the segment
      size.  64k pages came in slightly earlier, with POWER5+.
      We use one bit in vcpu->arch.hflags to indicate that the emulated
      cpu supports 64k pages, and another to indicate that it has the new
      tlbie definition.
      
      The KVM_PPC_GET_SMMU_INFO ioctl presents a bit of a problem, because
      the MMU capabilities depend on which CPU model we're emulating, but it
      is a VM ioctl not a VCPU ioctl and therefore doesn't get passed a VCPU
      fd.  In addition, commonly-used userspace (QEMU) calls it before
      setting the PVR for any VCPU.  Therefore, as a best effort we look at
      the first vcpu in the VM and return 64k pages or not depending on its
      capabilities.  We also make the PVR default to the host PVR on recent
      CPUs that support 1TB segments (and therefore multiple page sizes as
      well) so that KVM_PPC_GET_SMMU_INFO will include 64k page and 1TB
      segment support on those CPUs.
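
      A sketch of the best-effort check (the hflag name follows this
      description, but its value and the types are assumptions):

        #define BOOK3S_HFLAG_MULTI_PGSIZE 0x10   /* illustrative value */

        struct vcpu_sketch { unsigned long hflags; };

        /* KVM_PPC_GET_SMMU_INFO is a VM ioctl with no vcpu fd, so
         * consult the first vcpu, if any, when deciding whether to
         * advertise 64k pages. */
        static int smmu_info_has_64k(struct vcpu_sketch *first_vcpu)
        {
                return first_vcpu &&
                       (first_vcpu->hflags & BOOK3S_HFLAG_MULTI_PGSIZE);
        }
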
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  2. 29 Aug 2013 (1 commit)
  3. 30 Jun 2013 (3 commits)
    • KVM: PPC: Book3S PR: Invalidate SLB entries properly · 681562cd
      Paul Mackerras committed
      At present, if the guest creates a valid SLB (segment lookaside buffer)
      entry with the slbmte instruction, then invalidates it with the slbie
      instruction, then reads the entry with the slbmfee/slbmfev instructions,
      the result of the slbmfee will have the valid bit set, even though the
      entry is not actually considered valid by the host.  This is confusing,
      if not worse.  This fixes it by zeroing out the orige and origv fields
      of the SLB entry structure when the entry is invalidated.
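
      In sketch form (struct reduced to the relevant fields, names as in
      the description above):

        #include <stdint.h>

        struct slbe_sketch {
                uint64_t orige, origv;   /* raw words slbmfee/slbmfev return */
                int valid;
        };

        static void slbie_entry(struct slbe_sketch *slbe)
        {
                slbe->valid = 0;
                /* Also clear the cached raw words so a later slbmfee
                 * does not report the valid bit as still set. */
                slbe->orige = 0;
                slbe->origv = 0;
        }
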
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Allow guest to use 1TB segments · 0f296829
      Paul Mackerras committed
      With this, the guest can use 1TB segments as well as 256MB segments.
      Since we now have the situation where a single emulated guest segment
      could correspond to multiple shadow segments (as the shadow segments
      are still 256MB segments), this adds a new kvmppc_mmu_flush_segment()
      to scan for all shadow segments that need to be removed.
      
      This restructures the guest HPT (hashed page table) lookup code to
      use the correct hashing and matching functions for HPTEs within a
      1TB segment.  We use the standard hpt_hash() function instead of
      open-coding the hash calculation, and we use HPTE_V_COMPARE() with
      an AVPN value that has the B (segment size) field included.  The
      calculation of avpn is done a little earlier since it doesn't change
      in the loop starting at the do_second label.
      
      The computation in kvmppc_mmu_book3s_64_esid_to_vsid() changes so that
      it returns a 256MB VSID even if the guest SLB entry is a 1TB entry.
      This is because the users of this function are creating 256MB SLB
      entries.  We set a new VSID_1T flag so that entries created from 1T
      segments don't collide with entries from 256MB segments.
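
      A simplified sketch of the two hash computations (operating on an
      effective address rather than the kernel's VPN encoding; treat the
      masks as illustrative):

        #include <stdint.h>

        /* 256MB segment: hash = vsid ^ page index within the segment
         * 1TB segment:   hash = vsid ^ (vsid << 25) ^ page index */
        static uint64_t guest_hpt_hash(uint64_t vsid, uint64_t eaddr,
                                       int pshift, int is_1t)
        {
                uint64_t mask = is_1t ? 0xffffffffffULL : 0xfffffffULL;
                uint64_t pageidx = (eaddr & mask) >> pshift;
                uint64_t hash = is_1t ? vsid ^ (vsid << 25) ^ pageidx
                                      : vsid ^ pageidx;
                return hash & 0x7fffffffffULL;   /* 39-bit hash value */
        }
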
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Don't keep scanning HPTEG after we find a match · 6ed1485f
      Paul Mackerras committed
      The loop in kvmppc_mmu_book3s_64_xlate() that looks up a translation
      in the guest hashed page table (HPT) keeps going if it finds an
      HPTE that matches but doesn't allow access.  This is incorrect; it
      is different from what the hardware does, and there should never be
      more than one matching HPTE anyway.  This fixes it to stop when any
      matching HPTE is found.
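
      The fixed loop, in sketch form (struct and mask values are
      illustrative, not the actual HPTE layout):

        #include <errno.h>
        #include <stdint.h>

        struct hpte_sketch { uint64_t v, r; };

        #define HPTE_V_VALID 0x1ULL
        #define HPTE_V_AVPN  0x3fffffffffffff80ULL   /* illustrative */

        static int find_in_pteg(struct hpte_sketch pteg[8], uint64_t avpn,
                                uint64_t *raddr)
        {
                int i;

                for (i = 0; i < 8; i++) {
                        if (!(pteg[i].v & HPTE_V_VALID) ||
                            (pteg[i].v & HPTE_V_AVPN) != avpn)
                                continue;
                        /* First match wins, as on hardware; the caller
                         * checks protection and raises a protection
                         * fault if access is not allowed. */
                        *raddr = pteg[i].r & ~0xfffULL;
                        return 0;
                }
                return -ENOENT;   /* no translation at all */
        }
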
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  4. 26 Sep 2011 (1 commit)
    • KVM: PPC: Interpret SDR1 as HVA in PAPR mode · 04fcc11b
      Alexander Graf committed
      When running a PAPR guest, the guest is not allowed to set SDR1 - instead
      the HTAB information is held in internal hypervisor structures. But all of
      our current code relies on SDR1 and walking the HTAB like on real hardware.
      
      So in order to not be too intrusive, we simply set SDR1 to the HTAB we hold
      in host memory. That way we can keep the HTAB in user space, but use it from
      kernel space to map the guest.
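
      The idea, sketched (the names and the size-bits encoding are
      assumptions; SDR1's low bits encode the table size):

        #include <stdint.h>

        struct vcpu_sketch { uint64_t sdr1; };

        /* Point SDR1 at the HTAB held in host user-space memory so the
         * existing SDR1-based table walk works unchanged. */
        static void papr_set_sdr1(struct vcpu_sketch *v, void *htab_uaddr,
                                  uint64_t htabsize_bits)
        {
                v->sdr1 = (uint64_t)(uintptr_t)htab_uaddr | htabsize_bits;
        }
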
      Signed-off-by: Alexander Graf <agraf@suse.de>
  5. 12 Jul 2011 (1 commit)
  6. 24 Oct 2010 (2 commits)
    • KVM: PPC: Magic Page Book3s support · e8508940
      Alexander Graf committed
      We need to override EA as well as PA lookups for the magic page. When the guest
      tells us to project it, the magic page overrides any guest mappings.
      
      In order to reflect that, we need to hook into all the MMU layers of KVM to
      force map the magic page if necessary.
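
      A sketch of the override hook (the magic-page variables and the
      fallback walker are illustrative):

        #include <stdint.h>

        static uint64_t magic_page_ea;   /* 0 while no magic page is set */
        static uint64_t magic_page_pa;

        static int xlate_via_hpt(uint64_t ea, uint64_t *pa)
        {
                (void)ea; (void)pa;
                return -1;   /* stand-in for the normal HPT walk */
        }

        static int xlate(uint64_t ea, uint64_t *pa)
        {
                if (magic_page_ea && (ea & ~0xfffULL) == magic_page_ea) {
                        /* Force-map the magic page over any guest mapping. */
                        *pa = magic_page_pa | (ea & 0xfffULL);
                        return 0;
                }
                return xlate_via_hpt(ea, pa);
        }
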
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Convert MSR to shared page · 666e7252
      Alexander Graf committed
      One of the most obvious registers to share with the guest directly is the
      MSR. The MSR contains the "interrupts enabled" flag which the guest has to
      toggle in critical sections.
      
      So, to bring down the overhead of enabling and disabling interrupts,
      let's put the MSR into the shared page. Keep in mind that even though
      you can fully read
      its contents, writing to it doesn't always update all state. There are a few
      safe fields that don't require hypervisor interaction. See the documentation
      for a list of MSR bits that are safe to be set from inside the guest.
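
      A sketch of what the guest side gains (the struct is illustrative;
      MSR_EE is the architected external-interrupt-enable bit):

        #include <stdint.h>

        #define MSR_EE 0x8000ULL

        struct shared_page_sketch { uint64_t msr; };

        /* Toggling a "safe" bit like EE becomes a plain store to the
         * shared page instead of a trap into the hypervisor. */
        static void guest_irq_disable(struct shared_page_sketch *sp)
        {
                sp->msr &= ~MSR_EE;
        }
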
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  7. 17 May 2010 (4 commits)
    • KVM: PPC: Set VSID_PR also for Book3S_64 · 63556441
      Alexander Graf committed
      Book3S_64 didn't set VSID_PR when we're in PR=1. This led to pretty bad
      behavior when searching for the shadow segment, as part of the code relied
      on VSID_PR being set.
      
      This patch fixes booting Book3S_64 guests.
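
      In sketch form (the flag's value is illustrative):

        #include <stdint.h>

        #define VSID_PR 0x8000000000000000ULL

        /* Fold the problem-state (PR) bit into the VSID so user-mode
         * and supervisor-mode translations never alias in the shadow
         * MMU. */
        static uint64_t shadow_vsid(uint64_t gvsid, int msr_pr)
        {
                return msr_pr ? gvsid | VSID_PR : gvsid;
        }
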
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Improve split mode · f7bc74e1
      Alexander Graf committed
      When in split mode, instruction relocation and data relocation are not equal.
      
      So far we implemented this mode by reserving a special pseudo-VSID for the
      two cases and flushing all PTEs when going into split mode, which is slow.
      
      Unfortunately 32bit Linux and Mac OS X use split mode extensively. So to not
      slow down things too much, I came up with a different idea: Mark the split
      mode with a bit in the VSID and then treat it like any other segment.
      
      This means we can just flush the shadow segment cache, but keep the PTEs
      intact. I verified that this works with ppc32 Linux and Mac OS X 10.4
      guests and does speed them up.
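
      The marking, sketched (the flag value and helper are assumptions):

        #include <stdint.h>

        #define VSID_SPLIT_MODE 0x4000000000000000ULL

        /* In split mode (IR != DR), tag the VSID so the translation
         * lands in its own shadow segment; leaving split mode then only
         * needs a segment-cache flush, not a PTE flush. */
        static uint64_t segment_vsid(uint64_t gvsid, int ir, int dr)
        {
                if (ir != dr)
                        gvsid |= VSID_SPLIT_MODE;
                return gvsid;
        }
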
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Convert u64 -> ulong · af7b4d10
      Alexander Graf committed
      There are some pieces in the code that I overlooked that still use
      u64s instead of longs. This slows down 32 bit hosts unnecessarily, so
      let's just move them to ulong.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Improve indirect svcpu accessors · c7f38f46
      Alexander Graf committed
      We already have some inline functions we use to access vcpu or svcpu structs,
      depending on whether we're on booke or book3s. Since we just put a few more
      registers into the svcpu, we also need to make sure the respective callbacks
      are available and get used.
      
      So this patch moves direct use of the fields now in the svcpu struct to
      inline function calls. While at it, it also moves the definition of those
      inline function calls to respective header files for booke and book3s,
      greatly improving readability.
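
      The accessor pattern, sketched (types, names, and the 14-register
      split are assumptions; each subarch header would supply its own
      version):

        struct svcpu_sketch { unsigned long gpr[14]; };
        struct vcpu_sketch  {
                struct svcpu_sketch *svcpu;
                unsigned long gpr[32];
        };

        /* Callers use one helper; the backing storage differs between
         * book3s (shadow vcpu) and booke (plain vcpu field). */
        static inline unsigned long get_gpr(struct vcpu_sketch *vcpu, int num)
        {
                if (num < 14)   /* assumed to be held in the svcpu */
                        return vcpu->svcpu->gpr[num];
                return vcpu->gpr[num];
        }
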
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  8. 01 Mar 2010 (1 commit)
    • KVM: PPC: Make large pages work · 4b5c9b7f
      Alexander Graf committed
      An SLB entry contains two pieces of information related to size:
      
        1) PTE size
        2) SLB size
      
      The L bit defines the PTE to be "large" (usually meaning 16MB), while
      SLB_VSID_B_1T defines that the SLB entry should span 1TB instead of
      the default 256MB.
      
      Apparently I messed things up and just put those two in one box,
      shook it heavily, and came up with the current code, which handles
      large pages incorrectly because it also treats large-page SLB
      entries as "1TB" segment entries.
      
      This patch splits those two features apart, making Linux guests boot
      even when they have more than 256MB of RAM.
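
      The two attributes, decoded independently in sketch form (bit
      values as in the architecture's SLB VSID word, but treat them as
      illustrative here):

        #include <stdint.h>

        #define SLB_VSID_L    0x0000000000000100ULL   /* large base page */
        #define SLB_VSID_B_1T 0x4000000000000000ULL   /* 1TB segment span */

        /* PTE size and segment span are separate questions: a segment
         * can be 1TB with 4k pages, or 256MB with 16MB pages. */
        static int slbe_is_large_page(uint64_t vsid_word)
        {
                return !!(vsid_word & SLB_VSID_L);
        }

        static int slbe_is_1t_segment(uint64_t vsid_word)
        {
                return !!(vsid_word & SLB_VSID_B_1T);
        }
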
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  9. 27 Dec 2009 (1 commit)
  10. 08 Dec 2009 (1 commit)
  11. 05 Nov 2009 (1 commit)