1. 06 Oct 2012, 1 commit
    • KVM: PPC: Book3S: PR: Rework irq disabling · bd2be683
      Alexander Graf committed
      Today, we disable preemption while inside guest context, because we need
      to signal to the rest of the kernel that we are not in a preemptible
      context. However, during that time we already have interrupts disabled,
      which by itself indicates a non-preemptible context.

      The reason the irqs_disabled() checks fail for us, though, is that we
      manually control hard IRQs and ignore the lazy EE framework entirely.
      Let's stop doing that. Instead, let's always use lazy EE to indicate when
      we want IRQs disabled, but do one special final switch that leaves us
      with EE disabled and the soft state enabled. That way, when we get back
      out of guest state, we are immediately ready to process interrupts.

      This simplifies the code drastically and reduces the time during which we
      appear to have preemption disabled. (A toy model of the soft/hard IRQ
      states follows below.)
      Signed-off-by: Alexander Graf <agraf@suse.de>
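      As a rough illustration (not the kernel code; all names below are invented
      for this sketch), here is a minimal userspace model of the soft/hard IRQ
      states described above. The real code works with the lazy-EE soft-enable
      flag in the PACA and the MSR[EE] bit.

        /* Toy model: lazy (soft) IRQ disabling plus the final "hard off,
         * soft on" switch done right before entering the guest. */
        #include <stdbool.h>
        #include <stdio.h>

        static bool soft_enabled = true;  /* what we tell the world (lazy EE)  */
        static bool hard_ee      = true;  /* the real external-interrupt state */

        static void lazy_irq_disable(void) { soft_enabled = false; }
        static void lazy_irq_enable(void)  { soft_enabled = true;  }

        /* Final switch: really clear EE, but leave the soft state enabled so
         * that pending interrupts are processed as soon as we leave the guest. */
        static void guest_entry_switch(void)
        {
            hard_ee      = false;
            soft_enabled = true;
        }

        int main(void)
        {
            lazy_irq_disable();     /* normal lazy disable on the way in */
            guest_entry_switch();   /* EE off, but soft-enabled          */
            printf("in guest: hard_ee=%d soft=%d\n", hard_ee, soft_enabled);
            hard_ee = true;         /* guest exit: interrupts can run now */
            lazy_irq_enable();
            return 0;
        }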
  2. 31 Jul 2012, 1 commit
  3. 10 Jul 2012, 1 commit
  4. 26 Sep 2011, 2 commits
    • KVM: PPC: book3s_pr: Simplify transitions between virtual and real mode · 02143947
      Paul Mackerras committed
      This simplifies the way that the book3s_pr makes the transition to
      real mode when entering the guest.  We now call kvmppc_entry_trampoline
      (renamed from kvmppc_rmcall) in the base kernel using a normal function
      call instead of doing an indirect call through a pointer in the vcpu.
      If kvm is a module, the module loader takes care of generating a
      trampoline as it does for other calls to functions outside the module.
      
      kvmppc_entry_trampoline then disables interrupts and jumps to
      kvmppc_handler_trampoline_enter in real mode using an rfi[d].
      That then uses the link register as the address to return to
      (potentially in module space) when the guest exits.
      
      This also simplifies the way that we call the Linux interrupt handler
      when we exit the guest due to an external, decrementer or performance
      monitor interrupt.  Instead of turning on the MMU, then deciding that
      we need to call the Linux handler and turning the MMU back off again,
      we now go straight to the handler at the point where we would turn the
      MMU on.  The handler will then return to the virtual-mode code
      (potentially in the module).
      
      Along the way, this moves the setting and clearing of the HID5 DCBZ32
      bit into real-mode interrupts-off code, and also makes sure that
      we clear the MSR[RI] bit before loading values into SRR0/1.
      
      The net result is that we no longer need any code addresses to be
      stored in vcpu->arch. (The call-site change is sketched below.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
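      As a rough illustration of the call-site change (the struct and function
      names below are stand-ins, not the actual kernel symbols), compare an
      indirect call through a pointer stored in the vcpu with a plain direct
      call that the module loader can relocate:

        #include <stdio.h>

        struct toy_vcpu {
            void (*rmcall)(unsigned long handler);   /* old style: stored pointer */
        };

        static void kvmppc_entry_trampoline_toy(unsigned long handler)
        {
            printf("disable interrupts, enter real mode, jump to %#lx\n", handler);
        }

        static void enter_guest_old(struct toy_vcpu *vcpu, unsigned long handler)
        {
            vcpu->rmcall(handler);                   /* indirect, needs setup code */
        }

        static void enter_guest_new(unsigned long handler)
        {
            /* direct call; a trampoline is generated by the module loader */
            kvmppc_entry_trampoline_toy(handler);
        }

        int main(void)
        {
            struct toy_vcpu vcpu = { .rmcall = kvmppc_entry_trampoline_toy };
            enter_guest_old(&vcpu, 0x100);
            enter_guest_new(0x100);
            return 0;
        }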
    • KVM: PPC: Assemble book3s{,_hv}_rmhandlers.S separately · 177339d7
      Paul Mackerras committed
      This makes arch/powerpc/kvm/book3s_rmhandlers.S and
      arch/powerpc/kvm/book3s_hv_rmhandlers.S be assembled as
      separate compilation units rather than having them #included in
      arch/powerpc/kernel/exceptions-64s.S.  We no longer have any
      conditional branches between the exception prologs in
      exceptions-64s.S and the KVM handlers, so there is no need to
      keep their contents close together in the vmlinux image.
      
      In their current location, they are using up part of the limited
      space between the first-level interrupt handlers and the firmware
      NMI data area at offset 0x7000, and with some kernel configurations
      this area will overflow (e.g. allyesconfig), leading to an
      "attempt to .org backwards" error when compiling exceptions-64s.S.
      
      Moving them out requires that we add some #includes that the
      book3s_{,hv_}rmhandlers.S code was previously getting implicitly
      via exceptions-64s.S.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
  5. 12 Jul 2011, 3 commits
    • KVM: PPC: Split host-state fields out of kvmppc_book3s_shadow_vcpu · 3c42bf8a
      Paul Mackerras committed
      There are several fields in struct kvmppc_book3s_shadow_vcpu that
      temporarily store bits of host state while a guest is running,
      rather than anything relating to the particular guest or vcpu.
      This splits them out into a new kvmppc_host_state structure and
      modifies the definitions in asm-offsets.c to suit.
      
      On 32-bit, we have a kvmppc_host_state structure inside the
      kvmppc_book3s_shadow_vcpu since the assembly code needs to be able
      to get to them both with one pointer.  On 64-bit they are separate
      fields in the PACA.  This means that on 64-bit we don't need to
      copy the kvmppc_host_state in and out on vcpu load/unload, and
      in future will mean that the book3s_hv code doesn't need a
      shadow_vcpu struct in the PACA at all.  That does mean that we
      have to be careful not to rely on any values persisting in the
      hstate field of the paca across any point where we could block
      or get preempted. (The 32-bit vs. 64-bit layout is sketched below.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
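      A rough sketch of the resulting layout, using placeholder field names
      rather than the exact kernel definitions:

        /* Host bits that are only live while a guest is running. */
        struct kvmppc_host_state_toy {
            unsigned long host_r1;
            unsigned long host_r2;
            unsigned long vmhandler;
        };

        /* 32-bit: host state sits inside the shadow vcpu, so the assembly
         * code can reach both through a single pointer. */
        struct shadow_vcpu_32_toy {
            unsigned long gpr[14];
            struct kvmppc_host_state_toy hstate;
        };

        /* 64-bit: host state is a field of the PACA itself, so nothing has
         * to be copied in and out on vcpu load/unload. */
        struct paca_64_toy {
            struct kvmppc_host_state_toy kvm_hstate;
            /* ... other per-CPU fields ... */
        };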
    • powerpc, KVM: Rework KVM checks in first-level interrupt handlers · b01c8b54
      Paul Mackerras committed
      Instead of branching out-of-line with the DO_KVM macro to check if we
      are in a KVM guest at the time of an interrupt, this moves the KVM
      check inline in the first-level interrupt handlers.  This speeds up
      the non-KVM case and makes sure that none of the interrupt handlers
      are missing the check.
      
      Because the first-level interrupt handlers are now larger, some things
      had to be moved out of line in exceptions-64s.S.

      This all necessitated some minor changes to the interrupt entry code
      in KVM. It also streamlines the book3s_32 KVM test. (The shape of the
      inline check is sketched below.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
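      A conceptual C rendering of the inline check; the real change is in the
      assembly exception prologs, and the helper names below are invented:

        #include <stdbool.h>
        #include <stdio.h>

        static bool kvm_guest_active;   /* were we in a guest when the trap hit? */

        static void kvmppc_interrupt_toy(unsigned int trap)
        {
            printf("KVM exit path, trap %#x\n", trap);
        }

        static void linux_handler_toy(unsigned int trap)
        {
            printf("normal Linux handler, trap %#x\n", trap);
        }

        /* Test inline so the common non-KVM case falls straight through,
         * instead of branching out of line and back again. */
        static void first_level_handler(unsigned int trap)
        {
            if (kvm_guest_active) {
                kvmppc_interrupt_toy(trap);
                return;
            }
            linux_handler_toy(trap);
        }

        int main(void)
        {
            first_level_handler(0x300);   /* host case: fall through            */
            kvm_guest_active = true;
            first_level_handler(0x300);   /* guest case: take the KVM exit path */
            return 0;
        }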
    • KVM: PPC: Resolve real-mode handlers through function exports · a22a2dac
      Alexander Graf committed
      Up until now, Book3S KVM had variables stored in the kernel that a kernel module
      or the kvm code in the kernel could read from to figure out where some real mode
      helper functions are located.
      
      This is all unnecessary. The high bits of the EA get ignored in real mode,
      so we can just use the pointer as is. Also, it's a lot easier on
      relocations when we resolve the address of a function the normal way
      instead of jumping through hoops. (A before/after sketch follows below.)

      This patch fixes compilation with CONFIG_RELOCATABLE=y.
      Signed-off-by: Alexander Graf <agraf@suse.de>
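      A toy contrast of the two schemes, with invented symbol names; the point
      is that the function's linked address can be used as-is, because real
      mode ignores the top bits of the effective address:

        #include <stdio.h>

        static void real_mode_helper(void) { }

        /* Old scheme: publish the handler's address through a variable that
         * the module reads back at runtime. */
        static void (*exported_helper)(void) = real_mode_helper;

        int main(void)
        {
            unsigned long old_way = (unsigned long)exported_helper;    /* indirection  */
            unsigned long new_way = (unsigned long)real_mode_helper;   /* plain symbol */

            /* Same address either way, minus the bookkeeping. */
            printf("old=%#lx new=%#lx\n", old_way, new_way);
            return 0;
        }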
  6. 20 May 2011, 1 commit
  7. 20 Apr 2011, 3 commits
  8. 24 Oct 2010, 2 commits
  9. 17 May 2010, 4 commits
  10. 25 Apr 2010, 1 commit
    • KVM: PPC: Simplify kvmppc_load_up_(FPU|VMX|VSX) · 964b6411
      Alexander Graf committed
      We don't need such complex code. I had some thinkos while writing it,
      figuring I needed to support PPC32 paths on PPC64, which would have
      required DR=0, but everything just runs fine with DR=1.
      
      So let's make the functions simple C call wrappers that reserve some space on
      the stack for the respective functions to clobber.
      
      Fixes out-of-RMA-access (and thus guest FPU loading) on the PS3.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
  11. 01 Mar 2010, 4 commits
    • KVM: PPC: Add helper functions to call real mode loaders · d5e52813
      Alexander Graf committed
      Linux contains quite a few bits of code that load the FPU, Altivec and
      VSX state lazily for a task. It calls those bits in real mode, coming
      from an interrupt handler.

      For KVM we'd better reuse those, so let's wrap a bit of trampoline magic
      around them so that we can call them from normal module code. (The
      wrapper idea is sketched below.)
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
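      A toy illustration of the wrapper idea; in the real patch the wrappers
      are small assembly trampolines that reach the kernel's lazy loaders
      (load_up_fpu and friends), while everything below is a placeholder:

        #include <stdio.h>

        /* Stand-ins for the kernel's lazy state loaders, which are assembly
         * routines normally reached from interrupt handlers. */
        static void load_up_fpu_toy(void)     { printf("FPU state loaded\n"); }
        static void load_up_altivec_toy(void) { printf("Altivec state loaded\n"); }

        /* Module-callable wrappers; in reality these are trampolines that get
         * us into the right context before invoking the loader. */
        static void kvmppc_load_up_fpu_toy(void)     { load_up_fpu_toy(); }
        static void kvmppc_load_up_altivec_toy(void) { load_up_altivec_toy(); }

        int main(void)
        {
            kvmppc_load_up_fpu_toy();
            kvmppc_load_up_altivec_toy();
            return 0;
        }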
    • KVM: PPC: Call SLB patching code in interrupt safe manner · 021ec9c6
      Alexander Graf committed
      Currently we're racy when doing the transition from IR=1 to IR=0, from
      the module memory entry code to the real mode SLB switching code.
      
      To work around that I took a look at the RTAS entry code which is faced
      with a similar problem and did the same thing:
      
        A small helper in linear mapped memory that does mtmsr with IR=0 and
        then RFIs into the actual handler.

      Thanks to that trick we can safely take page faults in the entry code
      and only need to be really careful from the SLB switching part onwards.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: PPC: Implement 'skip instruction' mode · b4433a7c
      Alexander Graf committed
      To fetch the last instruction we were interrupted on, we enable DR in early
      exit code, where we are still in a very transitional phase between guest
      and host state.
      
      Most of the time this seemed to work, but another CPU can easily flush
      our TLB and HTAB, which sends us into the Linux page fault handler; that
      breaks completely, because we are still using the guest's SLB entries.

      To work around that, let's introduce a second KVM guest mode which
      defines that whenever we get a trap, we don't call the Linux handler or
      go into the KVM exit code, but simply jump over the faulting instruction.

      That way a potentially bad lwz doesn't trigger any faults, and we can
      later interpret the invalid instruction we fetched as "fetch didn't
      work". (A toy model of this mode follows below.)
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
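      A userspace toy model of the skip-instruction mode; the mode names and
      fields are invented, and the real logic sits in the low-level trap path:

        #include <stdint.h>
        #include <stdio.h>

        enum toy_guest_mode { MODE_GUEST, MODE_SKIP };

        struct toy_cpu {
            enum toy_guest_mode mode;
            uint64_t pc;            /* address of the instruction that trapped */
        };

        /* In skip mode the trap path runs no handler at all; it just steps
         * over the instruction, so a bad lwz cannot hurt us. */
        static void handle_trap(struct toy_cpu *cpu)
        {
            if (cpu->mode == MODE_SKIP) {
                cpu->pc += 4;       /* every PowerPC instruction is 4 bytes */
                return;
            }
            printf("normal exit handling at pc=%#llx\n",
                   (unsigned long long)cpu->pc);
        }

        int main(void)
        {
            /* "Fetch" the last guest instruction: do the load in skip mode and
             * treat a skipped (i.e. faulting) load as "fetch didn't work". */
            struct toy_cpu cpu = { .mode = MODE_SKIP, .pc = 0x1000 };
            handle_trap(&cpu);
            printf("after a fault in skip mode, pc=%#llx\n",
                   (unsigned long long)cpu.pc);
            return 0;
        }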
    • KVM: PPC: Use PACA backed shadow vcpu · 7e57cba0
      Alexander Graf committed
      We're being horribly racy right now. All the entry and exit code hijacks
      random fields from the PACA that could easily be used by different code
      in case we get interrupted, for example by a #MC or even a page fault.

      After discussing this with Ben, we figured it's best to reserve some more
      space in the PACA and shove some vcpu state off to there.

      That way we can drastically improve the readability of the code, and make
      it less racy and less complex. (A toy layout is sketched below.)
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Avi Kivity <avi@redhat.com>
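      A rough sketch of the idea, using placeholder field names rather than the
      actual PACA layout:

        /* Guest scratch state gets its own reserved slot in the per-CPU area
         * instead of borrowing unrelated fields. */
        struct toy_shadow_vcpu {
            unsigned long gpr[14];   /* volatile guest GPRs saved on exit */
            unsigned long cr;
            unsigned long xer;
        };

        struct toy_paca {
            unsigned long emergency_sp;  /* existing per-CPU fields ...           */
            unsigned long scratch0;      /* ... which the old code used to hijack */
            struct toy_shadow_vcpu shadow_vcpu;  /* new: dedicated space for KVM  */
        };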
  12. 05 Nov 2009, 1 commit
    • Add interrupt handling code · c862125c
      Alexander Graf committed
      Getting from host state to the guest is only half the story. We also need
      to return to our host context and handle whatever happened to get us out of
      the guest.
      
      On PowerPC every guest exit is an interrupt. So all we need to do is trap
      the host's interrupt handlers and get into our #VMEXIT code to handle it.
      
      PowerPCs also have a register that can add an offset to the interrupt
      handlers' addresses, which is what the booke KVM code uses. Unfortunately
      that is a hypervisor resource, and we also want to be able to run KVM
      when we're running in an LPAR. So we have to hook into the Linux
      interrupt handlers.
      Signed-off-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>