1. 05 Aug, 2014 (11 commits)
  2. 31 Jul, 2014 (1 commit)
    • A
      KVM: PPC: PR: Handle FSCR feature deselects · 8e6afa36
      Authored by Alexander Graf
      We handle FSCR feature bits (well, TAR only really today) lazily when the guest
      starts using them. So when a guest activates the bit and later uses that feature
      we enable it for real in hardware.
      
      However, when the guest stops using that bit we don't stop setting it in
      hardware. That means we can potentially lose a trap that the guest expects to
      happen because it thinks a feature is not active.
      
      This patch adds support to drop TAR when the guest turns it off in FSCR.
      While at it, it also restricts FSCR access to 64bit systems - 32bit ones
      don't have it.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      8e6afa36
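      A minimal C sketch of the lazy enable/disable idea described above; the
      helper name and the shadow_fscr field are assumptions for illustration,
      not necessarily the patch's exact identifiers (FSCR_TAR itself is the
      real facility bit).

        /* hypothetical sketch: handle a guest write to FSCR */
        static void kvmppc_handle_fscr_write(struct kvm_vcpu *vcpu, u64 new_fscr)
        {
            u64 old_fscr = vcpu->arch.fscr;

            /* Guest turned TAR off: drop it from the hardware FSCR so the
             * next guest TAR access traps again, as the guest expects. */
            if ((old_fscr & FSCR_TAR) && !(new_fscr & FSCR_TAR))
                vcpu->arch.shadow_fscr &= ~FSCR_TAR;

            vcpu->arch.fscr = new_fscr;
        }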
  3. 30 Jul, 2014 (3 commits)
  4. 29 Jul, 2014 (3 commits)
  5. 28 Jul, 2014 (22 commits)
    • A
      KVM: PPC: Handle magic page in kvmppc_ld/st · c12fb43c
      Authored by Alexander Graf
      We use kvmppc_ld and kvmppc_st to emulate load/store instructions that
      may access the magic page as well. Special-case it so that we can access
      it properly.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      c12fb43c
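      A hedged sketch of the special case: if the effective address falls
      inside the magic page, serve the access from the shared page directly
      instead of going through the normal translation path. The field name
      magic_page_ea and the surrounding structure are assumptions.

        /* hypothetical sketch of the magic-page special case in kvmppc_ld() */
        int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
                      bool data)
        {
            ulong ea = *eaddr;

            /* the magic page sits at a fixed guest EA; read it from the
             * shared page instead of translating through the MMU */
            if ((ea & ~0xfffUL) == vcpu->arch.magic_page_ea) {
                memcpy(ptr, (char *)vcpu->arch.shared + (ea & 0xfff), size);
                return EMULATE_DONE;
            }

            /* ... otherwise: GVA->GPA translation and guest memory read ... */
            return EMULATE_DONE;
        }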
    • A
      KVM: PPC: Move kvmppc_ld/st to common code · 35c4a733
      Authored by Alexander Graf
      We have enough common infrastructure now to resolve GVA->GPA mappings at
      runtime. With this we can move our book3s specific helpers to load / store
      in guest virtual address space to common code as well.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      35c4a733
    • A
      KVM: PPC: Implement kvmppc_xlate for all targets · 7d15c06f
      Authored by Alexander Graf
      We have a nice API to find the translated GPA of a GVA, including
      protection flags. So far we only use it on Book3S, but there's no reason
      the same shouldn't be used on BookE as well.
      
      Implement a kvmppc_xlate() version for BookE and clean it up to make it more
      readable in general.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      7d15c06f
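      The shape of such an API, sketched under the assumption that it mirrors
      the existing Book3S kvmppc_pte result (effective address in, real
      address plus protection flags out); the exact fields are illustrative.

        /* sketch: result of a guest-virtual to guest-physical translation */
        struct kvmppc_pte {
            ulong eaddr;       /* guest effective address */
            u64   vpage;       /* virtual page number */
            ulong raddr;       /* translated guest real address */
            bool  may_read;    /* protection flags */
            bool  may_write;
            bool  may_execute;
        };

        int kvmppc_xlate(struct kvm_vcpu *vcpu, ulong eaddr, bool data,
                         bool iswrite, struct kvmppc_pte *pte);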
    • A
      KVM: PPC: BOOK3S: HV: Update compute_tlbie_rb to handle 16MB base page · 63fff5c1
      Authored by Aneesh Kumar K.V
      When calculating the lower bits of the AVA field, use the shift
      count based on the base page size. Also add the missing segment
      size and remove a stale comment.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      63fff5c1
    • S
      Use the POWER8 Micro Partition Prefetch Engine in KVM HV on POWER8 · 9678cdaa
      Authored by Stewart Smith
      The POWER8 processor has a Micro Partition Prefetch Engine, which is
      a fancy way of saying "it has a way to store and load the contents of the
      L2 cache, or of the L2 plus the MRU way of the L3 cache". We initiate the
      storing of the log (a list of addresses) using the logmpp instruction and
      start the restore by writing to an SPR.
      
      The logmpp instruction takes parameters in a single 64bit register:
      - starting address of the table to store log of L2/L2+L3 cache contents
        - 32kb for L2
        - 128kb for L2+L3
        - aligned to the maximum size of the table (32kb or 128kb)
      - Log control (no-op, L2 only, L2 and L3, abort logout)
      
      We should abort any ongoing logging before initiating a new one.
      
      To initiate restore, we write to the MPPR SPR. The format of what to write
      to the SPR is similar to the logmpp instruction parameter:
      - starting address of the table to read from (same alignment requirements)
      - table size (no data, until end of table)
      - prefetch rate (from fastest possible to slower: about every 8, 16, 24 or
        32 cycles)
      
      The idea behind loading and storing the contents of L2/L3 cache is to
      reduce memory latency in a system that is frequently swapping vcores on
      a physical CPU.
      
      The best-case scenario for doing this is when some vcores are running very
      cache-heavy workloads. The worst case is when they get almost no cache
      hits, so we just generate needless memory operations.
      
      This implementation just does L2 store/load. In my benchmarks this proves
      to be useful.
      
      Benchmark 1:
       - 16 core POWER8
       - 3x Ubuntu 14.04LTS guests (LE) with 8 VCPUs each
       - No split core/SMT
       - two guests running sysbench memory test.
         sysbench --test=memory --num-threads=8 run
       - one guest running apache bench (of default HTML page)
         ab -n 490000 -c 400 http://localhost/
      
      This benchmark aims to measure the performance of a real-world application
      (apache) while the other guests are cache-hot with their own workloads.
      The sysbench memory benchmark does pointer-sized writes to a (small)
      memory buffer in a loop.
      
      In this benchmark with this patch I can see an improvement both in requests
      per second (~5%) and in mean and median response times (again, about 5%).
      The spread of minimum and maximum response times was largely unchanged.
      
      benchmark 2:
       - Same VM config as benchmark 1
       - all three guests running sysbench memory benchmark
      
      This benchmark aims to see whether there is a positive or negative effect
      on this cache-heavy workload. Due to the nature of the benchmark (stores),
      we may not see a difference in raw performance, but rather, hopefully, an
      improvement in the consistency of performance (a vcore that is switched in
      does not have to wait repeatedly for cachelines to be pulled in).
      
      The results of this benchmark are improvements in consistency of performance
      rather than performance itself. With this patch, the few outliers in duration
      go away and we get more consistent performance in each guest.
      
      benchmark 3:
       - same 3 guests and CPU configuration as benchmark 1 and 2.
       - two idle guests
       - 1 guest running STREAM benchmark
      
      This scenario also saw a performance improvement with this patch. On the
      Copy and Scale workloads from STREAM, I got a 5-6% improvement with this
      patch. For Add and Triad, it was around 10% (or more).
      
      benchmark 4:
       - same 3 guests as previous benchmarks
       - two guests running sysbench --memory, a distinctly different
         cache-heavy workload
       - one guest running STREAM benchmark.
      
      Similar improvements to benchmark 3.
      
      benchmark 5:
       - 1 guest, 8 VCPUs, Ubuntu 14.04
       - Host configured with split core (SMT8, subcores-per-core=4)
       - STREAM benchmark
      
      In this benchmark, we see a 10-20% performance improvement across the
      board in the STREAM results with this patch.
      
      Based on preliminary investigation and microbenchmarks
      by Prerna Saxena <prerna@linux.vnet.ibm.com>
      Signed-off-by: Stewart Smith <stewart@linux.vnet.ibm.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      9678cdaa
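      A hedged sketch of how the save/restore pair can be driven from the host
      on a vcore switch; logmpp and the MPPR SPR are as described above, but
      the constant names and the mpp_buffer field are assumptions rather than
      the patch's exact code.

        /* hedged sketch: log L2 contents on switch-out, prefetch on switch-in */
        static void mpp_save_l2(struct kvmppc_vcore *vc)
        {
            phys_addr_t addr = virt_to_phys(vc->mpp_buffer);

            /* abort any prefetch in progress, then start logging L2 */
            mtspr(SPRN_MPPR, addr | PPC_MPPR_FETCH_ABORT);
            logmpp(addr | PPC_LOGMPP_LOG_L2);
        }

        static void mpp_restore_l2(struct kvmppc_vcore *vc)
        {
            phys_addr_t addr = virt_to_phys(vc->mpp_buffer);

            /* kick off prefetching the whole table back into L2 */
            mtspr(SPRN_MPPR, addr | PPC_MPPR_FETCH_WHOLE_TABLE);
        }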
    • A
      KVM: PPC: Remove 440 support · b2677b8d
      Authored by Alexander Graf
      The 440 target hasn't been functioning properly for a few releases, and
      the fact that I was the only one to fix a very serious bug in it recently
      suggests to me that nobody else has been using it either.
      
      Furthermore, KVM on 440 is slow to the point of being unusable.
      
      We don't have to carry along completely unused code. Remove 440 and give
      us one less thing to worry about.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      b2677b8d
    • B
      KVM: PPC: Remove comment saying SPRG1 is used for vcpu pointer · 8c95ead6
      Authored by Bharat Bhushan
      Scott Wood pointed out that we are no longer using SPRG1 for the vcpu
      pointer; instead we use SPRN_SPRG_THREAD <=> SPRG3 (thread->vcpu). So this
      comment is no longer valid.
      
      Note: SPRN_SPRG3R is not supported (I see no need for it as of now);
      if we want to support it in the future then we will have to shift to
      using SPRG1 for the vcpu pointer.
      Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      8c95ead6
    • B
      kvm: ppc: bookehv: Save restore SPRN_SPRG9 on guest entry exit · 99e99d19
      Authored by Bharat Bhushan
      SPRN_SPRG9 is used by the debug interrupt handler, so saving and
      restoring it on guest entry/exit is required for debug support.
      Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      99e99d19
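      A C-level sketch of the idea (the actual patch does this in the bookehv
      entry/exit assembly); host_sprg9 is a hypothetical field name.

        static void guest_entry_sprg9(struct kvm_vcpu *vcpu)
        {
            vcpu->arch.host_sprg9 = mfspr(SPRN_SPRG9);  /* save host value  */
            mtspr(SPRN_SPRG9, vcpu->arch.sprg9);        /* load guest value */
        }

        static void guest_exit_sprg9(struct kvm_vcpu *vcpu)
        {
            vcpu->arch.sprg9 = mfspr(SPRN_SPRG9);       /* save guest value   */
            mtspr(SPRN_SPRG9, vcpu->arch.host_sprg9);   /* restore host value */
        }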
    • M
      KVM: PPC: Allow kvmppc_get_last_inst() to fail · 51f04726
      Authored by Mihai Caraman
      On book3e, the guest's last instruction is read on the exit path using the
      dedicated load-external-pid (lwepx) instruction. This load operation may
      fail due to TLB eviction and execute-but-not-read entries.
      
      This patch lays down the path for an alternative solution for reading the
      guest's last instruction, by allowing the kvmppc_get_last_inst() function
      to fail. Architecture-specific implementations of kvmppc_load_last_inst()
      may read the last guest instruction and instruct the emulation layer to
      re-execute the guest in case of failure.
      
      Make kvmppc_get_last_inst() definition common between architectures.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      51f04726
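      A hedged sketch of the failable API; EMULATE_AGAIN and
      KVM_INST_FETCH_FAILED are existing result codes, but the exact common
      signature below is an assumption.

        /* sketch: the common wrapper forwards the arch result; on
         * EMULATE_AGAIN the caller re-enters the guest and retries */
        static inline int kvmppc_get_last_inst(struct kvm_vcpu *vcpu, u32 *inst)
        {
            int ret = kvmppc_load_last_inst(vcpu, inst);

            if (ret != EMULATE_DONE)
                *inst = KVM_INST_FETCH_FAILED;  /* e.g. lwepx TLB miss */
            return ret;
        }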
    • M
      KVM: PPC: Book3e: Add TLBSEL/TSIZE defines for MAS0/1 · 9c0d4e0d
      Authored by Mihai Caraman
      Add missing defines MAS0_GET_TLBSEL() and MAS1_GET_TSIZE() for Book3E.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      9c0d4e0d
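      The defines plausibly follow the architected MAS0/MAS1 layout (TLBSEL in
      MAS0 bits 28-29, TSIZE in MAS1 bits 7-11); treat the exact masks and
      shifts below as an assumption.

        /* sketch of the helpers, from the standard MAS0/MAS1 bit layout */
        #define MAS0_GET_TLBSEL(mas0)  (((mas0) & MAS0_TLBSEL_MASK) >> MAS0_TLBSEL_SHIFT)
        #define MAS1_GET_TSIZE(mas1)   (((mas1) & MAS1_TSIZE_MASK) >> MAS1_TSIZE_SHIFT)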
    • B
      kvm: ppc: Add SPRN_EPR get helper function · 34f754b9
      Authored by Bharat Bhushan
      kvmppc_set_epr() is already defined in asm/kvm_ppc.h, so rename the
      get_epr helper function and move it to the same file.
      Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
      [agraf: remove duplicate return]
      Signed-off-by: Alexander Graf <agraf@suse.de>
      34f754b9
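      A hedged sketch of the resulting helper: on BOOKE-HV the guest EPR lives
      in the GEPR shadow SPR, otherwise in the vcpu struct.

        static inline u32 kvmppc_get_epr(struct kvm_vcpu *vcpu)
        {
        #ifdef CONFIG_KVM_BOOKE_HV
            return mfspr(SPRN_GEPR);
        #elif defined(CONFIG_BOOKE)
            return vcpu->arch.epr;
        #else
            return 0;
        #endif
        }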
    • B
      kvm: ppc: booke: Add shared struct helpers of SPRN_ESR · dc168549
      Authored by Bharat Bhushan
      Add and use kvmppc_set_esr() and kvmppc_get_esr() helper functions.
      Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      dc168549
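      In the shared-struct scheme such helpers read and write
      vcpu->arch.shared->esr with the shared page's endianness. A hedged,
      open-coded sketch (the kernel generates these via a wrapper macro, and
      the field width here is an assumption):

        static inline ulong kvmppc_get_esr(struct kvm_vcpu *vcpu)
        {
            if (kvmppc_shared_big_endian(vcpu))
                return be64_to_cpu(vcpu->arch.shared->esr);
            else
                return le64_to_cpu(vcpu->arch.shared->esr);
        }

        static inline void kvmppc_set_esr(struct kvm_vcpu *vcpu, ulong val)
        {
            if (kvmppc_shared_big_endian(vcpu))
                vcpu->arch.shared->esr = cpu_to_be64(val);
            else
                vcpu->arch.shared->esr = cpu_to_le64(val);
        }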
    • B
      kvm: ppc: bookehv: Added wrapper macros for shadow registers · 1dc0c5b8
      Authored by Bharat Bhushan
      There are shadow registers like GSPRG[0-3], GSRR0, GSRR1 etc. on
      BOOKE-HV, and these shadow registers are guest accessible.
      So these shadow registers need to be updated on BOOKE-HV.
      This patch adds new macros for the get/set helpers of shadow registers.
      Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      1dc0c5b8
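      A hedged sketch of the wrapper idea: on BOOKE-HV the value lives in a
      hardware shadow SPR that the guest can access directly, otherwise in the
      shared struct; the macro names are assumptions.

        #ifdef CONFIG_KVM_BOOKE_HV
        #define SHARED_SPRNG_WRAPPER(reg, size, shadow_spr)                   \
            static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)     \
            { return mfspr(shadow_spr); }                                     \
            static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu,        \
                                                u##size val)                  \
            { mtspr(shadow_spr, val); }
        #else
        #define SHARED_SPRNG_WRAPPER(reg, size, shadow_spr)                   \
            SHARED_WRAPPER(reg, size)   /* fall back to the shared struct */
        #endif

        SHARED_SPRNG_WRAPPER(srr0, 64, SPRN_GSRR0)   /* example instantiation */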
    • A
      KVM: PPC: Book3S: Make magic page properly 4k mappable · 89b68c96
      Authored by Alexander Graf
      The magic page is defined as a 4k page of per-vCPU data that is shared
      between the guest and the host to accelerate accesses to privileged
      registers.
      
      However, when the host is using 64k page size granularity we weren't quite
      as strict about that rule anymore. Instead, we partially treated all of
      the upper 64k as the magic page and mapped only the uppermost 4k with the
      actual magic contents.
      
      This works well enough for Linux, which doesn't use any memory in kernel
      space in the upper 64k, but Mac OS X got upset. So this patch makes the
      magic page actually stay within a 4k range even on 64k page size hosts.
      
      This patch fixes magic page usage with Mac OS X (using MOL) on 64k PAGE_SIZE
      hosts for me.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      89b68c96
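      A hedged sketch of the 4k-granular check; is_magic_page() is a
      hypothetical helper, not the patch's actual code.

        static bool is_magic_page(struct kvm_vcpu *vcpu, ulong gpa)
        {
            /* compare at 4k granularity regardless of the host PAGE_SIZE */
            return (gpa & ~0xfffUL) == (vcpu->arch.magic_page_pa & ~0xfffUL);
        }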
    • A
      KVM: PPC: Book3S: Add hack for split real mode · c01e3f66
      Authored by Alexander Graf
      Today we handle split real mode by mapping both instruction and data faults
      into a special virtual address space that only exists during the split mode
      phase.
      
      This is good enough to catch 32bit Linux guests that use split real mode for
      copy_from/to_user. In this case we're always prefixed with 0xc0000000 for our
      instruction pointer and can map the user space process freely below there.
      
      However, that approach fails when we're running KVM inside of KVM. Here the 1st
      level last_inst reader may well be in the same virtual page as a 2nd level
      interrupt handler.
      
      It also fails when running Mac OS X guests. Here we have a 4G/4G split, so a
      kernel copy_from/to_user implementation can easily overlap with user space
      addresses.
      
      The architecturally correct way to fix this would be to implement an instruction
      interpreter in KVM that kicks in whenever we go into split real mode. This
      interpreter, however, would not receive a great amount of testing and
      would be a lot of bloat for a reasonably isolated corner case.
      
      So I went back to the drawing board and tried to come up with a way to make
      split real mode work with a single flat address space. And then I realized that
      we could get away with the same trick that makes it work for Linux:
      
      Whenever we see an instruction address during split real mode that may collide,
      we just move it higher up the virtual address space to a place that hopefully
      does not collide (keep your fingers crossed!).
      
      That approach does work surprisingly well. I am able to successfully run
      Mac OS X guests with KVM and QEMU (no split real mode hacks like MOL) when I
      apply a tiny timing probe hack to QEMU. I'd say this is a win over even more
      broken split real mode :).
      Signed-off-by: Alexander Graf <agraf@suse.de>
      c01e3f66
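      A hedged sketch of the relocation trick; the mask/offset values and the
      exact conditions are assumptions for illustration.

        #define SPLIT_HACK_MASK  0xff000000   /* assumed high window mask  */
        #define SPLIT_HACK_OFFS  0xfb000000   /* assumed relocation target */

        static void kvmppc_fixup_split_real(struct kvm_vcpu *vcpu)
        {
            ulong msr = kvmppc_get_msr(vcpu);
            ulong pc = kvmppc_get_pc(vcpu);

            /* only act in data-relocation-only split real mode */
            if ((msr & (MSR_IR | MSR_DR)) != MSR_DR)
                return;

            /* move the fetch address up into the (hopefully) free window */
            if ((pc & SPLIT_HACK_MASK) != SPLIT_HACK_OFFS)
                kvmppc_set_pc(vcpu, (pc & ~SPLIT_HACK_MASK) | SPLIT_HACK_OFFS);
        }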
    • A
      KVM: PPC: Book3S: Move vcore definition to end of kvm_arch struct · 1287cb3f
      Authored by Alexander Graf
      When building KVM with a lot of vcores (NR_CPUS is big), we can potentially
      get out of the ld immediate range for dereferences inside that struct.
      
      Move the array to the end of our kvm_arch struct. This fixes compilation
      issues with NR_CPUS=2048 for me.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      1287cb3f
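      The constraint here is that the ld instruction's displacement is a
      signed 16-bit immediate, so frequently dereferenced fields must sit
      within +-32kB of the struct base. A hedged sketch of the layout idea
      (field names are illustrative):

        struct kvm_arch {
            /* small, hot fields first: their ld/std offsets stay within
             * the signed 16-bit displacement */
            unsigned long host_lpid;
            /* ... */

            /* big array last: with NR_CPUS=2048 its tail may exceed the
             * +-32kB range, but nothing needs an offset past it */
            struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
        };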
    • M
      KVM: PPC: e500: Emulate power management control SPR · debf27d6
      Authored by Mihai Caraman
      For the FSL e6500 core, the kernel uses the power management SPR
      (PWRMGTCR0) to enable idle power-down for cores and devices, by setting up
      the idle count period at boot time. With the host already controlling the
      power management configuration, the guest can simply benefit from it, so
      emulate guest writes to it as a general store.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      debf27d6
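      A hedged sketch of the emulation: accept the guest's write but keep it
      in the vcpu struct, so the host's configuration stays in force; the
      function name is illustrative.

        static int e500_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn,
                                      ulong spr_val)
        {
            switch (sprn) {
            case SPRN_PWRMGTCR0:
                /* general store: remember the value, change nothing */
                vcpu->arch.pwrmgtcr0 = spr_val;
                return EMULATE_DONE;
            default:
                return EMULATE_FAIL;   /* not handled here */
            }
        }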
    • A
      KVM: PPC: Book3S HV: Make HTAB code LE host aware · 6f22bd32
      Authored by Alexander Graf
      When running on an LE host all data structures are kept in little endian
      byte order. However, the HTAB still needs to be maintained in big endian.
      
      So every time we access the HTAB we need to make sure we do so in the
      right byte order. Fix up all accesses to byte-swap manually.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      6f22bd32
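      A hedged sketch of the access pattern: HPTEs stay big endian in memory,
      so on an LE host every read/write goes through an explicit swap (the
      helper names are illustrative).

        static unsigned long read_hpte_v(__be64 *hptep)
        {
            return be64_to_cpu(hptep[0]);   /* V word of the HPTE pair */
        }

        static void write_hpte_r(__be64 *hptep, unsigned long r)
        {
            hptep[1] = cpu_to_be64(r);      /* R word of the HPTE pair */
        }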
    • A
      PPC: Add asm helpers for BE 32bit load/store · 8f6822c4
      Authored by Alexander Graf
      From assembly code we might have to explicitly access not only 64bit
      values in BE, but sometimes also 32bit ones. Add helpers that allow for
      easy use of lwzx/stwx in their respective byte-reversed or native form.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8f6822c4
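      The helpers plausibly follow the pattern of the existing 64bit
      LDX_BE/STDX_BE ones: on a big-endian kernel they alias the native
      instruction, on little-endian the byte-reversed variant. A sketch:

        #ifdef __BIG_ENDIAN__
        #define LWZX_BE  stringify_in_c(lwzx)    /* native load is already BE */
        #define STWX_BE  stringify_in_c(stwx)
        #else
        #define LWZX_BE  stringify_in_c(lwbrx)   /* byte-reversed load  */
        #define STWX_BE  stringify_in_c(stwbrx)  /* byte-reversed store */
        #endif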
    • M
      KVM: PPC: e500: Fix default tlb for victim hint · d57cef91
      Authored by Mihai Caraman
      The TLB search operation used for the victim hint relies on the default
      TLB set by the host. When hardware tablewalk support is enabled in the
      host, the default TLB is TLB1, which leads KVM to evict the bolted entry.
      Set and restore the default TLB when searching for a victim hint.
      Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
      Reviewed-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      d57cef91
    • M
      KVM: PPC: Book3S HV: Add H_SET_MODE hcall handling · 9642382e
      Authored by Michael Neuling
      This adds support for the H_SET_MODE hcall.  This hcall is a
      multiplexer that has several functions, some of which are called
      rarely, and some which are potentially called very frequently.
      Here we add support for the functions that set the debug registers
      CIABR (Completed Instruction Address Breakpoint Register) and
      DAWR/DAWRX (Data Address Watchpoint Register and eXtension),
      since they could be updated by the guest as often as every context
      switch.
      
      This also adds a kvmppc_power8_compatible() function to test to see
      if a guest is compatible with POWER8 or not.  The CIABR and DAWR/X
      only exist on POWER8.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      9642382e
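      A hedged sketch of the multiplexer shape; the resource names come from
      PAPR, while the validation details below are simplified assumptions.

        static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
                                     unsigned long resource, unsigned long value1,
                                     unsigned long value2)
        {
            switch (resource) {
            case H_SET_MODE_RESOURCE_SET_CIABR:
                if (!kvmppc_power8_compatible(vcpu))
                    return H_P2;              /* CIABR only exists on POWER8 */
                mtspr(SPRN_CIABR, value1);
                return H_SUCCESS;
            case H_SET_MODE_RESOURCE_SET_DAWR:
                if (!kvmppc_power8_compatible(vcpu))
                    return H_P2;
                mtspr(SPRN_DAWR, value1);     /* watchpoint address   */
                mtspr(SPRN_DAWRX, value2);    /* watchpoint extension */
                return H_SUCCESS;
            default:
                return H_TOO_HARD;            /* punt rare functions elsewhere */
            }
        }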
    • P
      KVM: PPC: Book3S: Allow only implemented hcalls to be enabled or disabled · ae2113a4
      Authored by Paul Mackerras
      This adds code to check, when the KVM_CAP_PPC_ENABLE_HCALL
      capability is used to enable or disable in-kernel handling of an
      hcall, that the hcall is actually implemented by the kernel.
      If not, an EINVAL error is returned.
      
      This also checks the default-enabled list of hcalls and prints a
      warning if any hcall there is not actually implemented.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      ae2113a4
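      A hedged sketch of both checks; kvmppc_hcall_impl() and the list name
      follow the commit's description, and the exact signatures are
      assumptions.

        static int kvm_vm_ioctl_enable_hcall(struct kvm *kvm, unsigned long hcall)
        {
            /* refuse to toggle hcalls the kernel does not implement */
            if (!kvmppc_hcall_impl(hcall))
                return -EINVAL;
            set_bit(hcall / 4, kvm->arch.enabled_hcalls);  /* hcalls step by 4 */
            return 0;
        }

        static void check_default_hcalls(void)
        {
            int i;

            for (i = 0; default_hcall_list[i]; ++i)
                /* warn if a default-enabled hcall has no in-kernel handler */
                WARN_ON(!kvmppc_hcall_impl(default_hcall_list[i]));
        }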