1. 20 March 2014, 1 commit
    • powerpc/booke64: Use SPRG7 for VDSO · 9d378dfa
      Authored by Scott Wood
      Previously SPRG3 was marked for use by both VDSO and critical
      interrupts (though critical interrupts were not fully implemented).
      
      In commit 8b64a9df ("powerpc/booke64:
      Use SPRG0/3 scratch for bolted TLB miss & crit int"), Mihai Caraman
      made an attempt to resolve this conflict by restoring the VDSO value
      early in the critical interrupt, but this has some issues:
      
       - It's incompatible with EXCEPTION_COMMON which restores r13 from the
         by-then-overwritten scratch (this cost me some debugging time).
       - It forces critical exceptions to be a special case handled
         differently from even machine check and debug level exceptions.
       - It didn't occur to me that it was possible to make this work at all
         (by doing a final "ld r13, PACA_EXCRIT+EX_R13(r13)") until after
         I made (most of) this patch. :-)
      
      It might be worth investigating using a load rather than SPRG on return
      from all exceptions (except TLB misses where the scratch never leaves
      the SPRG) -- it could save a few cycles.  Until then, let's stick with
      SPRG for all exceptions.
      
      Since we cannot use SPRG4-7 for scratch without corrupting the state of
      a KVM guest, move VDSO to SPRG7 on book3e.  Since neither SPRG4-7 nor
      critical interrupts exist on book3s, SPRG3 is still used for VDSO
      there.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Cc: Mihai Caraman <mihai.caraman@freescale.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: kvm-ppc@vger.kernel.org
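      For reference, a minimal user-space sketch of what a VDSO-style
      consumer of this register looks like. It compiles only for
      powerpc64, and the SPR number 0x107 as the user-readable mirror
      of SPRG7 on book3e is my assumption, not taken from the patch:

          #include <stdint.h>
          #include <stdio.h>

          #define SPRN_USPRG7 0x107  /* assumed user-readable SPRG7, book3e */

          static inline uint64_t mfspr_usprg7(void)
          {
                  uint64_t val;
                  __asm__ __volatile__("mfspr %0, %1"
                                       : "=r"(val) : "i"(SPRN_USPRG7));
                  return val;
          }

          int main(void)
          {
                  printf("VDSO scratch (SPRG7) = 0x%llx\n",
                         (unsigned long long)mfspr_usprg7());
                  return 0;
          }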
  2. 09 December 2013, 2 commits
    • KVM: PPC: Book3S: PR: Enable interrupts earlier · 3d3319b4
      Authored by Alexander Graf
      Now that the svcpu sync is interrupt-aware, we can enable interrupts
      earlier in the exit code path again, moving 32-bit and 64-bit closer
      together.
      
      While at it, document the fact that we're always executing the exit
      path with interrupts enabled, so that the next person doesn't trip
      over this.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: PR: Don't clobber our exit handler id · d825a043
      Authored by Alexander Graf
      We call a C helper to save all svcpu fields into our vcpu. The C
      ABI states that r12 is considered volatile. However, we currently
      keep our exit handler id in r12.
      
      So we need to save it away into a non-volatile register instead,
      one that is guaranteed to be preserved across the C call.
      
      This bug hasn't hit anyone yet because gcc happens to be smart
      enough to generate code that doesn't even need r12, so the value
      stayed intact across the call by sheer luck. But we can't rely on
      that.
      Signed-off-by: Alexander Graf <agraf@suse.de>
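      The underlying ABI fact, as a tiny runnable illustration: in the
      PPC ELF ABI, r0 and r3-r12 are volatile (caller-saved) while
      r14-r31 are non-volatile, so a value that must survive a C call
      has to live in the latter class. Plain C locals get that for
      free; the asm code now does the equivalent by hand. The helper
      name below is a stand-in, not the kernel's:

          #include <stdio.h>

          /* Stand-in for the svcpu-save helper; per the ABI it may
           * freely clobber the volatile registers, r12 included. */
          static void save_svcpu_fields(void) { }

          int main(void)
          {
                  unsigned long exit_handler_id = 0x1234; /* was in r12 */
                  save_svcpu_fields();  /* safe: the local lives in
                                           callee-saved storage */
                  printf("exit handler id: 0x%lx\n", exit_handler_id);
                  return 0;
          }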
  3. 17 October 2013, 3 commits
    • kvm: powerpc: Add kvmppc_ops callback · 3a167bea
      Authored by Aneesh Kumar K.V
      This patch adds a new callback structure, kvmppc_ops. This will help
      us enable both HV and PR KVM together in the same kernel. The actual
      change to enable them together is done in a later patch in the series.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      [agraf: squash in booke changes]
      Signed-off-by: Alexander Graf <agraf@suse.de>
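      The shape of the change, as a hedged sketch: each backend fills in
      its own ops table, so HV and PR can be built into one kernel and
      dispatched at run time. The fields here are illustrative, not the
      kernel's actual kvmppc_ops layout:

          struct kvm_vcpu;  /* opaque here */

          struct kvmppc_ops {
                  int  (*vcpu_run)(struct kvm_vcpu *vcpu);
                  void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
                  void (*vcpu_put)(struct kvm_vcpu *vcpu);
          };

          /* The PR backend fills in its own table; HV does likewise. */
          static int pr_vcpu_run(struct kvm_vcpu *vcpu)
          {
                  (void)vcpu;
                  return 0;
          }

          static const struct kvmppc_ops kvm_ops_pr = {
                  .vcpu_run = pr_vcpu_run,
          };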
    • kvm: powerpc: book3s: remove kvmppc_handler_highmem label · 178db620
      Authored by Paul Mackerras
      This label is no longer used.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu · a2d56020
      Authored by Paul Mackerras
      Currently PR-style KVM keeps the volatile guest register values
      (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
      the main kvm_vcpu struct.  For 64-bit, the shadow_vcpu exists in two
      places, a kmalloc'd struct and in the PACA, and it gets copied back
      and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
      can't rely on being able to access the kmalloc'd struct.
      
      This changes the code to copy the volatile values into the shadow_vcpu
      as one of the last things done before entering the guest.  Similarly
      the values are copied back out of the shadow_vcpu to the kvm_vcpu
      immediately after exiting the guest.  We arrange for interrupts to be
      still disabled at this point so that we can't get preempted on 64-bit
      and end up copying values from the wrong PACA.
      
      This means that the accessor functions in kvm_book3s.h for these
      registers are greatly simplified, and are the same between PR and HV KVM.
      In places where accesses to shadow_vcpu fields are now replaced by
      accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
      Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
      
      With this, the time to read the PVR one million times in a loop went
      from 567.7ms to 575.5ms (averages of 6 values), an increase of about
      1.4% for this worst-case test for guest entries and exits.  The
      standard deviation of the measurements is about 11ms, so the
      difference is only marginally significant statistically.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
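      A hedged sketch of the copy-in/copy-out scheme described above;
      field and function names are illustrative, not the kernel's:

          #include <string.h>

          struct regs_volatile {            /* R0-R13, CR, LR, CTR, XER, PC */
                  unsigned long gpr[14];
                  unsigned long cr, lr, ctr, xer, pc;
          };

          struct shadow_vcpu { struct regs_volatile r; };
          struct kvm_vcpu    { struct regs_volatile r; };

          /* Last step before entering the guest; interrupts are already
           * off, so we can't migrate to another CPU (and thus another
           * PACA) in the middle of the copy. */
          static void copy_to_shadow(struct shadow_vcpu *s,
                                     const struct kvm_vcpu *v)
          {
                  memcpy(&s->r, &v->r, sizeof(s->r));
          }

          /* First step after guest exit, still with interrupts disabled. */
          static void copy_from_shadow(struct kvm_vcpu *v,
                                       const struct shadow_vcpu *s)
          {
                  memcpy(&v->r, &s->r, sizeof(v->r));
          }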
  4. 25 July 2013, 1 commit
  5. 10 July 2012, 2 commits
  6. 03 April 2012, 1 commit
  7. 26 September 2011, 1 commit
    • KVM: PPC: book3s_pr: Simplify transitions between virtual and real mode · 02143947
      Authored by Paul Mackerras
      This simplifies the way that the book3s_pr code makes the transition to
      real mode when entering the guest.  We now call kvmppc_entry_trampoline
      (renamed from kvmppc_rmcall) in the base kernel using a normal function
      call instead of doing an indirect call through a pointer in the vcpu.
      If kvm is a module, the module loader takes care of generating a
      trampoline as it does for other calls to functions outside the module.
      
      kvmppc_entry_trampoline then disables interrupts and jumps to
      kvmppc_handler_trampoline_enter in real mode using an rfi[d].
      That then uses the link register as the address to return to
      (potentially in module space) when the guest exits.
      
      This also simplifies the way that we call the Linux interrupt handler
      when we exit the guest due to an external, decrementer or performance
      monitor interrupt.  Instead of turning on the MMU, then deciding that
      we need to call the Linux handler and turning the MMU back off again,
      we now go straight to the handler at the point where we would turn the
      MMU on.  The handler will then return to the virtual-mode code
      (potentially in the module).
      
      Along the way, this moves the setting and clearing of the HID5 DCBZ32
      bit into real-mode interrupts-off code, and also makes sure that
      we clear the MSR[RI] bit before loading values into SRR0/1.
      
      The net result is that we no longer need any code addresses to be
      stored in vcpu->arch.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
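      A small runnable sketch of the calling-convention change: the
      indirect call through a pointer stored in the vcpu becomes an
      ordinary direct call, with any cross-module trampoline left to the
      module loader. The function body is a stand-in for the real one:

          #include <stdio.h>

          /* Stand-in for the real routine: disable interrupts, then
           * rfi[d] to the real-mode entry code, returning via LR. */
          static void kvmppc_entry_trampoline(void)
          {
                  puts("enter guest via real-mode trampoline");
          }

          struct vcpu_old { void (*rmcall)(void); };  /* old scheme */

          int main(void)
          {
                  struct vcpu_old v = { .rmcall = kvmppc_entry_trampoline };
                  v.rmcall();                  /* before: indirect call */
                  kvmppc_entry_trampoline();   /* after: plain direct call */
                  return 0;
          }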
  8. 12 July 2011, 2 commits
    • KVM: PPC: Split host-state fields out of kvmppc_book3s_shadow_vcpu · 3c42bf8a
      Authored by Paul Mackerras
      There are several fields in struct kvmppc_book3s_shadow_vcpu that
      temporarily store bits of host state while a guest is running,
      rather than anything relating to the particular guest or vcpu.
      This splits them out into a new kvmppc_host_state structure and
      modifies the definitions in asm-offsets.c to suit.
      
      On 32-bit, we have a kvmppc_host_state structure inside the
      kvmppc_book3s_shadow_vcpu since the assembly code needs to be able
      to get to them both with one pointer.  On 64-bit they are separate
      fields in the PACA.  This means that on 64-bit we don't need to
      copy the kvmppc_host_state in and out on vcpu load/unload, and
      in future will mean that the book3s_hv code doesn't need a
      shadow_vcpu struct in the PACA at all.  That does mean that we
      have to be careful not to rely on any values persisting in the
      hstate field of the paca across any point where we could block
      or get preempted.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
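      A hedged sketch of the data-structure split; the field names are
      illustrative, not the kernel's:

          struct kvmppc_host_state {        /* host-only save area */
                  unsigned long host_r1;    /* host stack pointer */
                  unsigned long host_msr;
                  unsigned long scratch0, scratch1;
          };

          /* 32-bit: embedded, so the asm reaches both via one pointer. */
          struct kvmppc_book3s_shadow_vcpu {
                  struct kvmppc_host_state hstate;
                  /* ... guest-side fields ... */
          };

          /* 64-bit: a separate PACA field, so nothing is copied on vcpu
           * load/unload; values in hstate must not be assumed to survive
           * any point where we could block or be preempted. */
          struct paca_sketch {
                  struct kvmppc_host_state kvm_hstate;
                  /* ... other PACA fields ... */
          };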
    • KVM: PPC: Move guest enter/exit down into subarch-specific code · df6909e5
      Authored by Paul Mackerras
      Instead of doing the kvm_guest_enter/exit() and local_irq_dis/enable()
      calls in powerpc.c, this moves them down into the subarch-specific
      book3s_pr.c and booke.c.  This eliminates an extra local_irq_enable()
      call in book3s_pr.c, and will be needed when we add SMT4 guest
      support in the book3s hypervisor mode code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
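      A self-contained sketch of where the calls end up after the move;
      every name here is a stand-in for the corresponding kernel
      primitive:

          #include <stdio.h>

          static void irq_disable(void) { puts("local_irq_disable"); }
          static void irq_enable(void)  { puts("local_irq_enable"); }
          static void guest_enter(void) { puts("kvm_guest_enter"); }
          static void guest_exit(void)  { puts("kvm_guest_exit"); }

          /* After the change the subarch vcpu_run owns the whole
           * sequence, so the generic dispatcher no longer needs its
           * extra irq_enable(). */
          static int subarch_vcpu_run(void)
          {
                  irq_disable();
                  guest_enter();
                  /* ... low-level guest entry and exit ... */
                  guest_exit();
                  irq_enable();
                  return 0;
          }

          int main(void) { return subarch_vcpu_run(); }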
  9. 17 May 2010, 4 commits
  10. 01 March 2010, 6 commits
    • KVM: PPC: Keep SRR1 flags around in shadow_msr · f7adbba1
      Authored by Alexander Graf
      SRR1 stores more information than just the MSR value. It also stores
      valuable information about the type of interrupt we received, for
      example whether the storage interrupt we just got was because of a
      missing htab entry or not.
      
      We use that information to speed up the exit path.
      
      Now if we get preempted before we can interpret the shadow_msr values,
      we get into vcpu_put, which calls the MSR handler, which in turn sets
      all the SRR1 information bits in shadow_msr to 0. Great.
      
      So let's preserve the SRR1 specific bits in shadow_msr whenever we set
      the MSR. They don't hurt.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
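      What "preserve the SRR1-specific bits" amounts to, as a hedged
      sketch; the mask value is an assumption for illustration, not the
      kernel's actual constant:

          /* assumed mask of the SRR1-only interrupt-status bits */
          #define SRR1_FLAG_BITS 0x00000000783f0000UL

          /* Keep the SRR1 status bits already latched in shadow_msr,
           * taking everything else from the new MSR value. */
          static unsigned long set_shadow_msr(unsigned long shadow_msr,
                                              unsigned long msr)
          {
                  return (shadow_msr & SRR1_FLAG_BITS)
                       | (msr & ~SRR1_FLAG_BITS);
          }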
    • KVM: PPC: Fix HID5 setting code · d35feb26
      Authored by Alexander Graf
      The code to unset HID5.dcbz32 is broken.
      This patch makes it do the right rotate magic.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Call SLB patching code in interrupt safe manner · 021ec9c6
      Authored by Alexander Graf
      Currently we're racy when doing the transition from IR=1 to IR=0, from
      the module memory entry code to the real mode SLB switching code.
      
      To work around that I took a look at the RTAS entry code which is faced
      with a similar problem and did the same thing:
      
        A small helper in linear mapped memory that does mtmsr with IR=0 and
        then RFIs into the actual handler.
      
      Thanks to that trick we can safely take page faults in the entry code
      and only need to be really careful from the SLB switching part
      onwards.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Get rid of unnecessary RFI · bc90923e
      Authored by Alexander Graf
      Using an RFI in IR=1 is dangerous. We need to set two SRRs and then do an RFI
      without getting interrupted at all, because every interrupt could potentially
      overwrite the SRR values.
      
      Fortunately, we don't need the RFI in this particular code path, so we
      can just replace it with an mtmsr and a b.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Use PACA backed shadow vcpu · 7e57cba0
      Authored by Alexander Graf
      We're being horribly racy right now. All the entry and exit code
      hijacks random fields from the PACA that could easily be used by
      different code in case we get interrupted, for example by a machine
      check or even a page fault.
      
      After discussing this with Ben, we figured it's best to reserve some
      more space in the PACA and just move some of the vcpu state there.
      
      That way we can drastically improve the readability of the code and
      make it less racy and less complex.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
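      The gist as a hedged sketch, with illustrative field names:
      dedicated, KVM-owned space in the PACA replaces the borrowed
      fields that an intervening machine check or page fault could also
      touch:

          struct kvmppc_shadow_vcpu_sketch {
                  unsigned long gpr_scratch[14];  /* entry/exit scratch */
                  unsigned long host_r1;          /* host stack pointer */
                  unsigned long shadow_srr1;
          };

          struct paca_with_kvm_sketch {
                  /* ... regular PACA fields, no longer hijacked ... */
                  struct kvmppc_shadow_vcpu_sketch shadow_vcpu;
          };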
    • KVM: PPC: Enable lightweight exits again · 97c4cfbe
      Authored by Alexander Graf
      The PowerPC C ABI defines that registers r14-r31 need to be preserved across
      function calls. Since our exit handler is written in C, we can make use of that
      and don't need to reload r14-r31 on every entry/exit cycle.
      
      This technique is also used in the BookE code and is called "lightweight exits"
      there. To follow the tradition, it's called the same in Book3S.
      
      So far this optimization was disabled though, as the code didn't work
      as expected.
      
      This patch fixes and enables lightweight exits again.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
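      The structure this enables, as a runnable sketch: because the C
      exit handler preserves r14-r31 per the ABI, the guest can be
      re-entered repeatedly without reloading them, and the full restore
      happens only once on the final exit. Names are stand-ins:

          #include <stdio.h>

          /* Stand-in for the C exit handler; the PPC C ABI guarantees
           * it preserves r14-r31, so state parked there survives. */
          static int handle_exit(int trap)
          {
                  return trap != 0;   /* nonzero = re-enter the guest */
          }

          int main(void)
          {
                  int trap = 3;
                  /* Lightweight exit loop: call into C and re-enter
                   * without reloading non-volatile registers each cycle. */
                  while (handle_exit(trap))
                          trap = 0;   /* next "guest exit" */
                  puts("full exit: restore r14-r31 once");
                  return 0;
          }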
  11. 05 November 2009, 1 commit