1. 06 October 2012, 15 commits
    • KVM: PPC: Book3S: Get/set guest SPRs using the GET/SET_ONE_REG interface · a136a8bd
      Committed by Paul Mackerras
      This enables userspace to get and set various SPRs (special-purpose
      registers) using the KVM_[GS]ET_ONE_REG ioctls.  With this, userspace
      can get and set all the SPRs that are part of the guest state, either
      through the KVM_[GS]ET_REGS ioctls, the KVM_[GS]ET_SREGS ioctls, or
      the KVM_[GS]ET_ONE_REG ioctls.
      
      The SPRs that are added here are:
      
      - DABR:  Data address breakpoint register
      - DSCR:  Data stream control register
      - PURR:  Processor utilization of resources register
      - SPURR: Scaled PURR
      - DAR:   Data address register
      - DSISR: Data storage interrupt status register
      - AMR:   Authority mask register
      - UAMOR: User authority mask override register
      - MMCR0, MMCR1, MMCRA: Performance monitor unit control registers
      - PMC1..PMC8: Performance monitor unit counter registers
      
      In order to reduce code duplication between PR and HV KVM code, this
      moves the kvm_vcpu_ioctl_[gs]et_one_reg functions into book3s.c and
      centralizes the copying between user and kernel space there.  The
      registers that are handled differently between PR and HV, and those
      that exist only in one flavor, are handled in kvmppc_[gs]et_one_reg()
      functions that are specific to each flavor.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      [agraf: minimal style fixes]
      Signed-off-by: Alexander Graf <agraf@suse.de>
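      The split described above (generic copying to and from user space plus
      per-flavor handling of individual register IDs) can be pictured with a
      heavily simplified sketch.  Everything below (struct, enum and function
      names) is an illustrative stand-in, not the kernel's actual code:

          #include <stdint.h>
          #include <errno.h>

          /* Hypothetical, simplified guest-SPR state just for this sketch. */
          struct vcpu_sprs {
              uint64_t dabr, dscr, purr, spurr;
          };

          /* Stand-in IDs; the real interface uses KVM_REG_PPC_* constants. */
          enum { REG_DABR, REG_DSCR, REG_PURR, REG_SPURR };

          /* One flavor-specific "get" handler: map an ID onto saved state. */
          static int get_one_reg_sketch(const struct vcpu_sprs *s, int id,
                                        uint64_t *val)
          {
              switch (id) {
              case REG_DABR:  *val = s->dabr;  return 0;
              case REG_DSCR:  *val = s->dscr;  return 0;
              case REG_PURR:  *val = s->purr;  return 0;
              case REG_SPURR: *val = s->spurr; return 0;
              default:        return -EINVAL;  /* unknown register ID */
              }
          }

      Returning an error for unknown IDs lets the common code reject
      unsupported registers back to userspace.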
    • KVM: PPC: Book3S HV: Fix updates of vcpu->cpu · a47d72f3
      Committed by Paul Mackerras
      This removes the powerpc "generic" updates of vcpu->cpu in load and
      put, and moves them to the various backends.
      
      The reason is that HV KVM manages that field itself, and the
      generic updates might corrupt it. For all the vCPU threads of a
      core, the field always contains the CPU number of the -first-
      hardware CPU of the core (the one that is online from a host
      Linux perspective).
      
      However, the preempt notifiers are going to be called on the
      thread vCPUs while the guest is running (because those tasks
      sleep on our private waitqueue), causing the put path to be
      called and potentially clobbering the value.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
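      As a toy illustration of that invariant (the thread count and the
      rounding rule below are assumptions made for the sketch, not taken
      from the commit):

          #include <stdio.h>

          /* Assumed for the sketch: four hardware threads per core. */
          #define THREADS_PER_CORE 4

          /* All vCPU threads of a core record the core's first HW CPU. */
          static int hv_vcpu_cpu(int host_cpu)
          {
              return host_cpu - (host_cpu % THREADS_PER_CORE);
          }

          int main(void)
          {
              for (int cpu = 0; cpu < 8; cpu++)
                  printf("host cpu %d -> vcpu->cpu %d\n", cpu, hv_vcpu_cpu(cpu));
              return 0;
          }

      A per-thread hook that wrote the current CPU number into vcpu->cpu on
      every context switch would silently break this.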
    • KVM: PPC: Book3S HV: Handle memory slot deletion and modification correctly · dfe49dbd
      Committed by Paul Mackerras
      This adds an implementation of kvm_arch_flush_shadow_memslot for
      Book3S HV, and arranges for kvmppc_core_commit_memory_region to
      flush the dirty log when modifying an existing slot.  With this,
      we can handle deletion and modification of memory slots.
      
      kvm_arch_flush_shadow_memslot calls kvmppc_core_flush_memslot, which
      on Book3S HV now traverses the reverse map chains to remove any HPT
      (hashed page table) entries referring to pages in the memslot.  This
      gets called by generic code whenever deleting a memslot or changing
      the guest physical address for a memslot.
      
      We flush the dirty log in kvmppc_core_commit_memory_region for
      consistency with what x86 does.  We only need to flush when an
      existing memslot is being modified, because for a new memslot the
      rmap array (which stores the dirty bits) is all zero, meaning that
      every page is considered clean already, and when deleting a memslot
      we obviously don't care about the dirty bits any more.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
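      The traversal can be sketched roughly as follows; the data structures
      and names are invented for illustration (the real code walks per-page
      reverse-map chains and invalidates the corresponding HPT entries):

          #include <stddef.h>

          /* One rmap head per guest page, chaining the HPT entries that
           * currently map that page (illustrative layout only). */
          struct hpt_entry {
              struct hpt_entry *next_in_rmap;
              int valid;
          };

          struct memslot_sketch {
              unsigned long npages;
              struct hpt_entry **rmap;
          };

          /* Flush the slot: invalidate every entry chained off every page. */
          static void flush_memslot_sketch(struct memslot_sketch *slot)
          {
              for (unsigned long i = 0; i < slot->npages; i++) {
                  struct hpt_entry *e = slot->rmap[i];

                  while (e) {
                      e->valid = 0;          /* drop the translation */
                      e = e->next_in_rmap;
                  }
                  slot->rmap[i] = NULL;      /* chain is now empty */
              }
          }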
    • KVM: PPC: Move kvm->arch.slot_phys into memslot.arch · a66b48c3
      Committed by Paul Mackerras
      Now that we have an architecture-specific field in the kvm_memory_slot
      structure, we can use it to store the array of page physical addresses
      that we need for Book3S HV KVM on PPC970 processors.  This reduces the
      size of struct kvm_arch for Book3S HV, and also reduces the size of
      struct kvm_arch_memory_slot for other PPC KVM variants since the fields
      in it are now only compiled in for Book3S HV.
      
      This necessitates making the kvm_arch_create_memslot and
      kvm_arch_free_memslot operations specific to each PPC KVM variant.
      That in turn means that we now don't allocate the rmap arrays on
      Book3S PR and Book E.
      
      Since we now unpin pages and free the slot_phys array in
      kvmppc_core_free_memslot, we no longer need to do that in
      kvmppc_core_destroy_vm, because the generic code takes care of
      freeing all the memslots when destroying a VM.
      
      We now need the new memslot to be passed in to
      kvmppc_core_prepare_memory_region, since we need to initialize its
      arch.slot_phys member on Book3S HV.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
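      The shape of the change can be pictured with a small self-contained
      sketch; the field and function names are invented, not the kernel's:

          #include <stdlib.h>

          /* Per-memslot, architecture-private data that lives and dies with
           * the slot instead of sitting in a VM-wide array. */
          struct arch_memslot_sketch {
              unsigned long *slot_phys;   /* page addresses (HV-only need) */
              unsigned long *rmap;        /* reverse-map heads / dirty bits */
          };

          static int create_memslot_sketch(struct arch_memslot_sketch *a,
                                           unsigned long npages)
          {
              a->slot_phys = calloc(npages, sizeof(*a->slot_phys));
              a->rmap = calloc(npages, sizeof(*a->rmap));
              if (!a->slot_phys || !a->rmap) {
                  free(a->slot_phys);
                  free(a->rmap);
                  return -1;
              }
              return 0;
          }

          static void free_memslot_sketch(struct arch_memslot_sketch *a)
          {
              free(a->slot_phys);   /* unpinning would also happen here */
              free(a->rmap);
              a->slot_phys = NULL;
              a->rmap = NULL;
          }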
    • KVM: PPC: Add return value to core_check_requests · 7c973a2e
      Committed by Alexander Graf
      Requests may want to tell us that we need to go back into host state,
      so add a return value for the checks.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Add return value in prepare_to_enter · 7ee78855
      Committed by Alexander Graf
      Our prepare_to_enter helper wants to be able to return to the host in
      more circumstances than just when an interrupt is pending. Broaden the
      interface a bit and move even more generic code into the generic helper.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Move kvm_guest_enter call into generic code · 3766a4c6
      Committed by Alexander Graf
      We need to call kvm_guest_enter in booke and book3s, so move its
      call to generic code.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: PR: Rework irq disabling · bd2be683
      Committed by Alexander Graf
      Today, we disable preemption while inside guest context, because we need
      to expose to the world that we are not in a preemptible context. However,
      during that time we already have interrupts disabled, which would indicate
      that we are in a non-preemptible context.
      
      The reason the irqs_disabled() checks fail for us, though, is that we
      manually control hard IRQs and ignore the lazy EE framework entirely.
      Let's stop doing that. Instead, let's always use lazy EE to indicate
      when we want IRQs disabled, and only do a special final switch that
      leaves us with EE disabled but the soft state enabled. That way, when
      we come back out of guest state, we are immediately ready to process
      interrupts.
      
      This simplifies the code drastically and reduces the time during which
      we appear to be preempt-disabled.
      Signed-off-by: Alexander Graf <agraf@suse.de>
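      The soft/hard split can be pictured with a toy two-flag model (purely
      illustrative; the real mechanism is the powerpc lazy interrupt-disable
      machinery):

          #include <stdbool.h>
          #include <stdio.h>

          static bool soft_enabled = true;  /* what irqs_disabled()-style
                                             * checks effectively observe */
          static bool hard_ee = true;       /* stands in for the MSR[EE] bit */

          /* Ordinary "disable": flip the soft flag, no hardware access. */
          static void lazy_irq_disable(void)
          {
              soft_enabled = false;
          }

          /* The special final switch before guest entry: hardware EE off,
           * soft state left enabled, so pending interrupts are processed
           * as soon as we come back out of the guest. */
          static void prepare_guest_entry(void)
          {
              hard_ee = false;
              soft_enabled = true;
          }

          int main(void)
          {
              lazy_irq_disable();
              prepare_guest_entry();
              printf("EE=%d soft-enabled=%d\n", hard_ee, soft_enabled);
              return 0;
          }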
    • KVM: PPC: Consistentify vcpu exit path · 24afa37b
      Committed by Alexander Graf
      When getting out of __vcpu_run, let's be consistent about the state we
      return in. We want to always
      
        * have IRQs enabled
        * have called kvm_guest_exit before
      Signed-off-by: Alexander Graf <agraf@suse.de>
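      Schematically, with stubs standing in for kvm_guest_exit() and the
      IRQ-enable call, every path out of the run loop funnels through one
      tail that restores the promised state:

          #include <stdio.h>

          static void guest_exit_stub(void) { puts("guest time accounted"); }
          static void irq_enable_stub(void) { puts("interrupts enabled"); }

          static int vcpu_run_sketch(void)
          {
              int ret = 0;

              /* ... guest entry/exit loop would run here ... */

              guest_exit_stub();   /* always called before returning */
              irq_enable_stub();   /* always leave with IRQs enabled */
              return ret;
          }

          int main(void)
          {
              return vcpu_run_sketch();
          }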
    • KVM: PPC: Book3S: PR: Indicate we're out of guest mode · 0652eaae
      Committed by Alexander Graf
      When going out of guest mode, record that fact in vcpu->mode. That way
      requests from other CPUs don't need to kick us needlessly in order to
      get processed, because they will be handled the next time we enter the
      guest anyway.
      Signed-off-by: Alexander Graf <agraf@suse.de>
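      A toy sketch of the idea (the enum values, struct and barrier comment
      are illustrative, not the kernel's definitions):

          /* Publish "outside guest" on the way out so that senders of
           * remote requests can skip the kick. */
          enum vcpu_mode_sketch { IN_GUEST_SKETCH, OUTSIDE_GUEST_SKETCH };

          struct vcpu_sketch {
              enum vcpu_mode_sketch mode;
          };

          static void leave_guest_mode(struct vcpu_sketch *v)
          {
              v->mode = OUTSIDE_GUEST_SKETCH;
              /* in the kernel, a memory barrier would order this store
               * against subsequently reading pending requests */
          }

          static int needs_kick(const struct vcpu_sketch *v)
          {
              /* only vCPUs still inside the guest need an IPI to notice
               * a freshly posted request */
              return v->mode == IN_GUEST_SKETCH;
          }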
    • KVM: PPC: Exit guest context while handling exit · 706fb730
      Committed by Alexander Graf
      The x86 implementation of KVM accounts for host time while processing
      guest exits. Do the same for us.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S: PR: Only do resched check once per exit · c63ddcb4
      Committed by Alexander Graf
      Now that we use our generic exit helper, we can safely drop the
      kvm_resched call that we used to trigger at the beginning of the exit
      handler function.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3s: PR: Add (dumb) MMU Notifier support · 9b0cb3c8
      Committed by Alexander Graf
      Now that we have very simple MMU Notifier support for e500 in place,
      also add the same simple support to book3s. It gets us one step closer
      to actual fast support.
      Signed-off-by: Alexander Graf <agraf@suse.de>
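      "Dumb" here means the notifier callbacks make no attempt to be
      selective; a minimal sketch of that behaviour, with invented names
      rather than the kernel's callbacks:

          /* Drop all guest mappings and let them fault back in. */
          struct shadow_mmu_sketch {
              unsigned long nr_mappings;
          };

          static void flush_all_shadow_sketch(struct shadow_mmu_sketch *m)
          {
              m->nr_mappings = 0;   /* everything refaults on next access */
          }

          /* Callback shape for "the host unmapped this address range". */
          static void unmap_hva_range_sketch(struct shadow_mmu_sketch *m,
                                             unsigned long start,
                                             unsigned long end)
          {
              (void)start;          /* range ignored: that is the "dumb" part */
              (void)end;
              flush_all_shadow_sketch(m);
          }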
    • KVM: PPC: Use same kvmppc_prepare_to_enter code for booke and book3s_pr · 03d25c5b
      Committed by Alexander Graf
      We need to do the same things when preparing to enter a guest for booke and
      book3s_pr cores. Fold the generic code into a generic function that both call.
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: PR: Use generic tracepoint for guest exit · 97c95059
      Committed by Alexander Graf
      We want to have tracing information on guest exits for booke as well
      as book3s. Since most information is identical, use a common trace point.
      Signed-off-by: Alexander Graf <agraf@suse.de>
  2. 06 August 2012, 1 commit
  3. 06 May 2012, 3 commits
  4. 08 April 2012, 4 commits
  5. 03 April 2012, 2 commits
  6. 02 April 2012, 1 commit
  7. 20 March 2012, 1 commit
  8. 05 March 2012, 9 commits
  9. 26 December 2011, 1 commit
  10. 17 November 2011, 1 commit
  11. 01 November 2011, 1 commit
    • powerpc: include export.h for files using EXPORT_SYMBOL/THIS_MODULE · 93087948
      Committed by Paul Gortmaker
      Fix build failures in powerpc caused by code that relied on the
      previously allowed implicit presence of module.h, which now leads to
      errors like these:
      
      arch/powerpc/mm/mmu_context_hash32.c:76:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
      arch/powerpc/mm/tlb_hash32.c:48:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/kernel/pci_32.c:51:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL_GPL'
      arch/powerpc/kernel/iomap.c:36:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/platforms/44x/canyonlands.c:126:1: error: type defaults to 'int' in declaration of 'EXPORT_SYMBOL'
      arch/powerpc/kvm/44x.c:168:59: error: 'THIS_MODULE' undeclared (first use in this function)
      
      [with several contributions from Stephen Rothwell <sfr@canb.auug.org.au>]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
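      The fix itself is mechanical; a representative (illustrative, not
      verbatim from the commit) hunk looks like this, with linux/export.h
      supplying the macros named in the errors:

          #include <linux/export.h>   /* EXPORT_SYMBOL, EXPORT_SYMBOL_GPL,
                                       * THIS_MODULE */

          int example_exported_value = 42;
          EXPORT_SYMBOL_GPL(example_exported_value);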
  12. 26 September 2011, 1 commit
    • KVM: PPC: book3s_pr: Simplify transitions between virtual and real mode · 02143947
      Committed by Paul Mackerras
      This simplifies the way that the book3s_pr code makes the transition to
      real mode when entering the guest.  We now call kvmppc_entry_trampoline
      (renamed from kvmppc_rmcall) in the base kernel using a normal function
      call instead of doing an indirect call through a pointer in the vcpu.
      If kvm is a module, the module loader takes care of generating a
      trampoline as it does for other calls to functions outside the module.
      
      kvmppc_entry_trampoline then disables interrupts and jumps to
      kvmppc_handler_trampoline_enter in real mode using an rfi[d].
      That then uses the link register as the address to return to
      (potentially in module space) when the guest exits.
      
      This also simplifies the way that we call the Linux interrupt handler
      when we exit the guest due to an external, decrementer or performance
      monitor interrupt.  Instead of turning on the MMU, then deciding that
      we need to call the Linux handler and turning the MMU back off again,
      we now go straight to the handler at the point where we would turn the
      MMU on.  The handler will then return to the virtual-mode code
      (potentially in the module).
      
      Along the way, this moves the setting and clearing of the HID5 DCBZ32
      bit into real-mode interrupts-off code, and also makes sure that
      we clear the MSR[RI] bit before loading values into SRR0/1.
      
      The net result is that we no longer need any code addresses to be
      stored in vcpu->arch.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>