1. 07 Jul 2014, 2 commits
  2. 09 Jan 2014, 1 commit
    • KVM: PPC: Use load_fp/vr_state rather than load_up_fpu/altivec · 09548fda
      Paul Mackerras authored
      The load_up_fpu and load_up_altivec functions were never intended to
      be called from C, and do things like modifying the MSR value in their
      callers' stack frames, which are assumed to be interrupt frames.  In
      addition, on 32-bit Book S they require the MMU to be off.
      
      This makes KVM use the new load_fp_state() and load_vr_state() functions
      instead of load_up_fpu/altivec.  This means we can remove the assembler
      glue in book3s_rmhandlers.S, and potentially fixes a bug on Book E,
      where load_up_fpu was called directly from C.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
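
      In rough C, the calling-convention difference looks like this (a
      hedged sketch: the vcpu field names are assumptions based on this
      commit series, not the exact kernel diff):

          /* load_fp_state()/load_vr_state() are ordinary C-callable
           * functions: they load the register file from the given state
           * block and return with blr, touching neither the caller's
           * stack frame nor any MSR image stored in it, and they work
           * with the MMU on. */
          void load_fp_state(struct thread_fp_state *fp);
          void load_vr_state(struct thread_vr_state *vr);

          static void kvmppc_load_guest_fp(struct kvm_vcpu *vcpu)
          {
                  load_fp_state(&vcpu->arch.fp);  /* plain call from C */
          #ifdef CONFIG_ALTIVEC
                  load_vr_state(&vcpu->arch.vr);  /* no asm glue needed */
          #endif
          }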
  3. 09 Dec 2013, 1 commit
    • KVM: PPC: Book3S: PR: Enable interrupts earlier · 3d3319b4
      Alexander Graf authored
      Now that the svcpu sync is interrupt aware, we can enable interrupts
      earlier in the exit code path again, moving 32-bit and 64-bit closer
      together.
      
      While at it, document the fact that we're always executing the exit
      path with interrupts enabled so that the next person doesn't trip
      over this.
      Signed-off-by: Alexander Graf <agraf@suse.de>
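
      A sketch of the exit-path shape this arrives at (hypothetical
      kernel-style code, not the actual diff):

          static int kvmppc_handle_exit(struct kvm_vcpu *vcpu, int exit_nr)
          {
                  /* The svcpu sync is interrupt aware, so interrupts can
                   * come back on as soon as we are out of the guest. */
                  local_irq_enable();

                  /* ...process the exit reason... */

                  /* Invariant this commit documents: from here on, the
                   * exit path always runs with interrupts enabled. */
                  WARN_ON(irqs_disabled());
                  return RESUME_GUEST;
          }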
  4. 17 Oct 2013, 2 commits
    • KVM: PPC: Book3S: Move skip-interrupt handlers to common code · 4f6c11db
      Paul Mackerras authored
      Both PR and HV KVM have separate, identical copies of the
      kvmppc_skip_interrupt and kvmppc_skip_Hinterrupt handlers, which are
      used when an interrupt happens while loading the instruction that
      caused an exit from the guest.  To eliminate this duplication and
      make it easier to compile in both PR and HV KVM, this patch moves
      the code to arch/powerpc/kernel/exceptions-64s.S, alongside other
      kernel interrupt handler code.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S PR: Keep volatile reg values in vcpu rather than shadow_vcpu · a2d56020
      Paul Mackerras authored
      Currently PR-style KVM keeps the volatile guest register values
      (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than
      the main kvm_vcpu struct.  For 64-bit, the shadow_vcpu exists in two
      places, a kmalloc'd struct and the PACA, and it gets copied back
      and forth in kvmppc_core_vcpu_load/put(), because the real-mode code
      can't rely on being able to access the kmalloc'd struct.
      
      This changes the code to copy the volatile values into the shadow_vcpu
      as one of the last things done before entering the guest.  Similarly
      the values are copied back out of the shadow_vcpu to the kvm_vcpu
      immediately after exiting the guest.  We arrange for interrupts to be
      still disabled at this point so that we can't get preempted on 64-bit
      and end up copying values from the wrong PACA.
      
      This means that the accessor functions in kvm_book3s.h for these
      registers are greatly simplified, and are the same between PR and HV KVM.
      In places where accesses to shadow_vcpu fields are now replaced by
      accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs.
      Finally, on 64-bit, we don't need the kmalloc'd struct at all any more.
      
      With this, the time to read the PVR one million times in a loop went
      from 567.7ms to 575.5ms (averages of 6 values), an increase of about
      1.4% for this worst-case test for guest entries and exits.  The
      standard deviation of the measurements is about 11ms, so the
      difference is only marginally significant statistically.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
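
      The entry-side half of this might be sketched as follows
      (simplified; the real copy covers more fields, and the field names
      here are assumptions):

          /* Called as one of the last steps before entering the guest,
           * with interrupts still hard-disabled so we cannot be
           * preempted and end up writing into the wrong CPU's PACA. */
          static void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
                                           struct kvm_vcpu *vcpu)
          {
                  int i;

                  for (i = 0; i < 14; i++)        /* volatile GPRs r0-r13 */
                          svcpu->gpr[i] = vcpu->arch.gpr[i];
                  svcpu->cr  = vcpu->arch.cr;
                  svcpu->xer = vcpu->arch.xer;
                  svcpu->ctr = vcpu->arch.ctr;
                  svcpu->lr  = vcpu->arch.lr;
                  svcpu->pc  = vcpu->arch.pc;
          }

      The mirror-image copy runs immediately after guest exit, before
      interrupts are re-enabled.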
  5. 06 Dec 2012, 1 commit
    • KVM: PPC: Book3S PR: Fix VSX handling · 28c483b6
      Paul Mackerras authored
      This fixes various issues in how we were handling the VSX registers
      that exist on POWER7 machines.  First, we were running off the end
      of the current->thread.fpr[] array.  Ultimately this was because the
      vcpu->arch.vsr[] array is sized to be able to store both the FP
      registers and the extra VSX registers (i.e. 64 entries), but PR KVM
      only uses it for the extra VSX registers (i.e. 32 entries).
      
      Secondly, calling load_up_vsx() from C code is a really bad idea,
      because it jumps to fast_exception_return at the end, rather than
      returning with a blr instruction.  This was causing it to jump off
      to a random location with random register contents, since it was using
      the largely uninitialized stack frame created by kvmppc_load_up_vsx.
      
      In fact, it isn't necessary to call either __giveup_vsx or load_up_vsx,
      since giveup_fpu and load_up_fpu handle the extra VSX registers as well
      as the standard FP registers on machines with VSX.  Also, since VSX
      instructions can access the VMX registers and the FP registers as well
      as the extra VSX registers, we have to load up the FP and VMX registers
      before we can turn on the MSR_VSX bit for the guest.  Conversely, if
      we save away any of the VSX or FP registers, we have to turn off MSR_VSX
      for the guest.
      
      To handle all this, it is more convenient for a single call to
      kvmppc_giveup_ext() to handle all the state saving that needs to be done,
      so we make it take a set of MSR bits rather than just one, and the switch
      statement becomes a series of if statements.  Similarly kvmppc_handle_ext
      needs to be able to load up more than one set of registers.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
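
      A sketch of the reworked interface described above (simplified from
      the description; not the verbatim kernel code):

          static void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr)
          {
                  /* VSX instructions reach the FP and VMX register files,
                   * so giving up VSX means giving up FP and VMX too. */
                  if (msr & MSR_VSX)
                          msr |= MSR_FP | MSR_VEC;

                  msr &= vcpu->arch.guest_owned_ext;
                  if (!msr)
                          return;

                  /* The old switch statement becomes a series of ifs,
                   * since several facilities can be given up at once. */
                  if (msr & MSR_FP)
                          giveup_fpu(current);    /* on VSX machines this
                                                   * also saves the extra
                                                   * VSX registers */
                  if (msr & MSR_VEC)
                          giveup_altivec(current);

                  /* Saving any FP/VSX state forces MSR_VSX off. */
                  vcpu->arch.guest_owned_ext &= ~(msr | MSR_VSX);
          }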
  6. 06 Oct 2012, 1 commit
    • KVM: PPC: Book3S: PR: Rework irq disabling · bd2be683
      Alexander Graf authored
      Today, we disable preemption while inside guest context, because we need
      to expose to the world that we are not in a preemptible context. However,
      during that time we already have interrupts disabled, which would indicate
      that we are in a non-preemptible context.
      
      The reason the checks for irqs_disabled() fail for us though is that we
      manually control hard IRQs and ignore all the lazy EE framework. Let's
      stop doing that. Instead, let's always use lazy EE to indicate when we
      want to disable IRQs, but do a special final switch that gets us into
      an EE-disabled but soft-enabled state. That way, when we get back out
      of guest state, we are immediately ready to process interrupts.
      
      This simplifies the code drastically and reduces the time that we appear
      as preempt disabled.
      Signed-off-by: Alexander Graf <agraf@suse.de>
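
      The "special final switch" can be sketched like this (the PACA
      field names are assumptions from 64-bit kernels of that era):

          static inline void kvmppc_lazy_ee_enable(void)
          {
          #ifdef CONFIG_PPC64
                  /* Tell the lazy EE framework interrupts are on... */
                  local_paca->irq_happened = 0;
                  local_paca->soft_enabled = 1;
                  /* ...while MSR[EE] stays hard-disabled until the entry
                   * into the guest, so that on exit any pending interrupt
                   * is taken and handled immediately. */
          #endif
          }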
  7. 31 Jul 2012, 1 commit
  8. 10 Jul 2012, 1 commit
  9. 26 Sep 2011, 2 commits
    • KVM: PPC: book3s_pr: Simplify transitions between virtual and real mode · 02143947
      Paul Mackerras authored
      This simplifies the way that the book3s_pr code makes the transition
      to real mode when entering the guest.  We now call kvmppc_entry_trampoline
      (renamed from kvmppc_rmcall) in the base kernel using a normal function
      call instead of doing an indirect call through a pointer in the vcpu.
      If kvm is a module, the module loader takes care of generating a
      trampoline as it does for other calls to functions outside the module.
      
      kvmppc_entry_trampoline then disables interrupts and jumps to
      kvmppc_handler_trampoline_enter in real mode using an rfi[d].
      That then uses the link register as the address to return to
      (potentially in module space) when the guest exits.
      
      This also simplifies the way that we call the Linux interrupt handler
      when we exit the guest due to an external, decrementer or performance
      monitor interrupt.  Instead of turning on the MMU, then deciding that
      we need to call the Linux handler and turning the MMU back off again,
      we now go straight to the handler at the point where we would turn the
      MMU on.  The handler will then return to the virtual-mode code
      (potentially in the module).
      
      Along the way, this moves the setting and clearing of the HID5 DCBZ32
      bit into real-mode interrupts-off code, and also makes sure that
      we clear the MSR[RI] bit before loading values into SRR0/1.
      
      The net result is that we no longer need any code addresses to be
      stored in vcpu->arch.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
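
      In miniature, the call-site change (shown as C for shape; the real
      call site is in assembly, and the old field name is an assumption
      from the commit text):

          static void kvmppc_enter_rm(struct kvm_vcpu *vcpu)
          {
          #if 0   /* before: indirect call via an address in the vcpu */
                  ((void (*)(void))vcpu->arch.rmcall)();
          #else
                  /* after: a normal call; for modular kvm, the module
                   * loader generates the trampoline to reach the base
                   * kernel, and kvmppc_entry_trampoline uses LR as the
                   * (possibly module-space) return address on exit */
                  kvmppc_entry_trampoline();
          #endif
          }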
    • KVM: PPC: Assemble book3s{,_hv}_rmhandlers.S separately · 177339d7
      Paul Mackerras authored
      This makes arch/powerpc/kvm/book3s_rmhandlers.S and
      arch/powerpc/kvm/book3s_hv_rmhandlers.S be assembled as
      separate compilation units rather than having them #included in
      arch/powerpc/kernel/exceptions-64s.S.  We no longer have any
      conditional branches between the exception prologs in
      exceptions-64s.S and the KVM handlers, so there is no need to
      keep their contents close together in the vmlinux image.
      
      In their current location, they are using up part of the limited
      space between the first-level interrupt handlers and the firmware
      NMI data area at offset 0x7000, and with some kernel configurations
      this area will overflow (e.g. allyesconfig), leading to an
      "attempt to .org backwards" error when compiling exceptions-64s.S.
      
      Moving them out requires that we add some #includes that the
      book3s_{,hv_}rmhandlers.S code was previously getting implicitly
      via exceptions-64s.S.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  10. 12 Jul 2011, 3 commits
    • KVM: PPC: Split host-state fields out of kvmppc_book3s_shadow_vcpu · 3c42bf8a
      Paul Mackerras authored
      There are several fields in struct kvmppc_book3s_shadow_vcpu that
      temporarily store bits of host state while a guest is running,
      rather than anything relating to the particular guest or vcpu.
      This splits them out into a new kvmppc_host_state structure and
      modifies the definitions in asm-offsets.c to suit.
      
      On 32-bit, we have a kvmppc_host_state structure inside the
      kvmppc_book3s_shadow_vcpu since the assembly code needs to be able
      to get to them both with one pointer.  On 64-bit they are separate
      fields in the PACA.  This means that on 64-bit we don't need to
      copy the kvmppc_host_state in and out on vcpu load/unload, and
      in future will mean that the book3s_hv code doesn't need a
      shadow_vcpu struct in the PACA at all.  That does mean that we
      have to be careful not to rely on any values persisting in the
      hstate field of the paca across any point where we could block
      or get preempted.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
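
      In outline (field list abridged and partly assumed):

          /* Host state that must survive a guest run, e.g. where to find
           * the host stack and TOC again on the way out. */
          struct kvmppc_host_state {
                  ulong host_r1;          /* host stack pointer */
                  ulong host_r2;          /* host TOC pointer */
                  ulong host_msr;
                  ulong vmhandler;
                  ulong scratch0;
                  ulong scratch1;
          };

          #ifdef CONFIG_PPC_BOOK3S_32
          /* 32-bit: embedded in the shadow vcpu, so the assembly code
           * can reach both structures through a single pointer. */
          struct kvmppc_book3s_shadow_vcpu {
                  /* ...guest volatile registers... */
                  struct kvmppc_host_state hstate;
          };
          #endif
          /* 64-bit: shadow_vcpu and hstate are separate PACA fields, so
           * neither needs copying on vcpu load/unload; nothing in hstate
           * may be relied on across a point where we could block. */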
    • powerpc, KVM: Rework KVM checks in first-level interrupt handlers · b01c8b54
      Paul Mackerras authored
      Instead of branching out-of-line with the DO_KVM macro to check if we
      are in a KVM guest at the time of an interrupt, this moves the KVM
      check inline in the first-level interrupt handlers.  This speeds up
      the non-KVM case and makes sure that none of the interrupt handlers
      are missing the check.
      
      Because the first-level interrupt handlers are now larger, some things
      had to be moved out of line in exceptions-64s.S.
      
      This all necessitated some minor changes to the interrupt entry code
      in KVM.  This also streamlines the book3s_32 KVM test.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Resolve real-mode handlers through function exports · a22a2dac
      Alexander Graf authored
      Up until now, Book3S KVM had variables stored in the kernel that a
      kernel module or the kvm code in the kernel could read from to figure
      out where some real mode helper functions were located.
      
      This is all unnecessary. The high bits of the EA get ignored in real
      mode, so we can just use the pointer as is. Also, it's a lot easier
      on relocations when we use the normal way of resolving the address of
      a function, instead of jumping through hoops.
      
      This patch fixes compilation with CONFIG_RELOCATABLE=y.
      Signed-off-by: Alexander Graf <agraf@suse.de>
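
      The idea, sketched (symbol use illustrative; the surrounding
      setup of SRR1 and the rfid are elided):

          extern char kvmppc_handler_trampoline_enter[];

          static void kvmppc_prepare_rm_entry(void)
          {
                  /* Real mode ignores the high EA bits, so the ordinary
                   * linker-resolved address of the exported handler works
                   * as-is; no kernel variable holding its location is
                   * needed, and CONFIG_RELOCATABLE relocations are
                   * applied by the normal module/linker machinery. */
                  mtspr(SPRN_SRR0, (ulong)kvmppc_handler_trampoline_enter);
                  /* ...set SRR1 with MSR_IR/DR clear, then rfid... */
          }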
  11. 20 May 2011, 1 commit
  12. 20 Apr 2011, 3 commits
  13. 24 Oct 2010, 2 commits
  14. 17 May 2010, 4 commits
  15. 25 Apr 2010, 1 commit
    • KVM: PPC: Simplify kvmppc_load_up_(FPU|VMX|VSX) · 964b6411
      Alexander Graf authored
      We don't need such complex code. I had some thinkos while writing it,
      figuring I needed to support PPC32 paths on PPC64, which would have
      required DR=0, but everything just runs fine with DR=1.
      
      So let's make the functions simple C call wrappers that reserve some space on
      the stack for the respective functions to clobber.
      
      Fixes out-of-RMA-access (and thus guest FPU loading) on the PS3.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  16. 01 Mar 2010, 4 commits
    • KVM: PPC: Add helper functions to call real mode loaders · d5e52813
      Alexander Graf authored
      Linux contains quite a few bits of code to load FPU, Altivec and VSX
      lazily for a task. It calls those bits in real mode, coming from an
      interrupt handler.
      
      For KVM we'd better reuse those, so let's wrap a bit of trampoline
      magic around them; then we can call them from normal module code.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Call SLB patching code in interrupt safe manner · 021ec9c6
      Alexander Graf authored
      Currently we're racy when doing the transition from IR=1 to IR=0, from
      the module memory entry code to the real mode SLB switching code.
      
      To work around that I took a look at the RTAS entry code which is faced
      with a similar problem and did the same thing:
      
        A small helper in linear mapped memory that does mtmsr with IR=0 and
        then RFIs into the actual handler.
      
      Thanks to that trick we can safely take page faults in the entry code
      and only need to be really careful about what we do from the SLB
      switching part onwards.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Implement 'skip instruction' mode · b4433a7c
      Alexander Graf authored
      To fetch the last instruction we were interrupted on, we enable DR in early
      exit code, where we are still in a very transitional phase between guest
      and host state.
      
      Most of the time this seemed to work, but another CPU can easily flush
      our TLB and HTAB, which sends us into the Linux page fault handler;
      that breaks badly, because we are still using the guest's SLB entries.
      
      To work around that, let's introduce a second KVM guest mode that defines
      that whenever we get a trap, we don't call the Linux handler or go into
      the KVM exit code, but just jump over the faulting instruction.
      
      That way a potentially bad lwz doesn't trigger any faults and we can later
      on interpret the invalid instruction we fetched as "fetch didn't work".
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
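
      In rough C (the real fetch is assembly; the constant, helper, and
      field names here are assumptions for illustration):

          static u32 kvmppc_read_inst(struct kvm_vcpu *vcpu, ulong pc)
          {
                  u32 inst = KVM_INST_FETCH_FAILED;   /* preloaded marker */

                  /* While in SKIP mode, a trap neither enters the Linux
                   * fault path nor the KVM exit code: the trap handler
                   * just advances the PC past the faulting instruction. */
                  to_svcpu(vcpu)->in_guest = KVM_GUEST_MODE_SKIP;
                  inst = *(u32 *)pc;          /* the potentially bad lwz */
                  to_svcpu(vcpu)->in_guest = KVM_GUEST_MODE_NONE;

                  /* If the load faulted and was skipped, inst still holds
                   * the marker and the fetch is treated as failed. */
                  return inst;
          }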
    • KVM: PPC: Use PACA backed shadow vcpu · 7e57cba0
      Alexander Graf authored
      We're being horribly racy right now. All the entry and exit code hijacks
      random fields from the PACA that could easily be used by different code in
      case we get interrupted, for example by a #MC or even page fault.
      
      After discussing this with Ben, we figured it's best to reserve some
      more space in the PACA and just move some of the vcpu state into it.
      
      That way we can drastically improve the readability of the code, make it
      less racy and less complex.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
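
      Schematically (a simplified view of the reserved PACA slot):

          struct paca_struct {
                  /* ...existing per-CPU fields... */
          #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
                  /* Dedicated, interrupt-safe home for guest state during
                   * entry/exit, instead of hijacking scratch fields that
                   * a #MC or a page fault might also want to use. */
                  struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
          #endif
          };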
  17. 05 Nov 2009, 1 commit
    • Add interrupt handling code · c862125c
      Alexander Graf authored
      Getting from host state to the guest is only half the story. We also need
      to return to our host context and handle whatever happened to get us out of
      the guest.
      
      On PowerPC every guest exit is an interrupt. So all we need to do is trap
      the host's interrupt handlers and get into our #VMEXIT code to handle it.
      
      PowerPCs also have a register that can add an offset to the interrupt
      handlers' addresses, which is what the booke KVM code uses. Unfortunately
      that is a hypervisor resource, and we also want to be able to run KVM
      when we're running in an LPAR. So we have to hook into the Linux
      interrupt handlers.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>