1. 27 January 2014 (3 commits)
    • KVM: PPC: Book3S HV: Add support for DABRX register on POWER7 · 8563bf52
      Committed by Paul Mackerras
      The DABRX (DABR extension) register on POWER7 processors provides finer
      control over which accesses cause a data breakpoint interrupt.  It
      contains three bits that control whether accesses in user, kernel and
      hypervisor mode respectively cause data breakpoint interrupts, plus one
      bit that enables interrupts for both real-mode and virtual-mode
      accesses.  Currently, KVM sets DABRX to allow both kernel and user
      accesses to cause interrupts while in the guest.
      
      This adds support for the guest to specify other values for DABRX.
      PAPR defines a H_SET_XDABR hcall to allow the guest to set both DABR
      and DABRX with one call.  This adds a real-mode implementation of
      H_SET_XDABR, which shares most of its code with the existing H_SET_DABR
      implementation.  To support this, we add a per-vcpu field to store the
      DABRX value plus code to get and set it via the ONE_REG interface.
      
      For Linux guests to use this new hcall, userspace needs to add
      "hcall-xdabr" to the set of strings in the /chosen/hypertas-functions
      property in the device tree.  If userspace does this and then migrates
      the guest to a host where the kernel doesn't include this patch, then
      userspace will need to implement H_SET_XDABR by writing the specified
      DABR value to the DABR using the ONE_REG interface.  In that case, the
      old kernel will set DABRX to DABRX_USER | DABRX_KERNEL.  That should
      still work correctly, at least for Linux guests, since Linux guests
      cope with getting data breakpoint interrupts in modes that weren't
      requested by just ignoring the interrupt, and Linux guests never set
      DABRX_BTI.
      
      This also makes H_SET_DABR and H_SET_XDABR work on POWER8, which has
      the DAWR and DAWRX registers instead of DABR/X.  Guests that know about
      POWER8 should use H_SET_MODE rather than H_SET_[X]DABR, but guests
      running in POWER7 compatibility mode will still use H_SET_[X]DABR.
      For them, this adds the logic to convert DABR/X values into DAWR/X
      values on POWER8.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
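      As a hedged sketch of that fallback (not from the patch itself):
      userspace could implement H_SET_XDABR on an older host by writing only
      the DABR via the ONE_REG interface.  KVM_REG_PPC_DABR and
      KVM_SET_ONE_REG are the standard KVM UAPI names; the helper name is
      hypothetical.

          #include <stdint.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          /* Hypothetical helper: emulate H_SET_XDABR on a host kernel
           * without this patch by setting only the DABR.  The old kernel
           * keeps its fixed DABRX_USER | DABRX_KERNEL setting. */
          static int set_dabr_fallback(int vcpu_fd, uint64_t dabr)
          {
                  struct kvm_one_reg reg = {
                          .id   = KVM_REG_PPC_DABR,
                          .addr = (uint64_t)(uintptr_t)&dabr,
                  };
                  return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
          }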
    • KVM: PPC: Book3S HV: Context-switch new POWER8 SPRs · b005255e
      Committed by Michael Neuling
      This adds fields to the struct kvm_vcpu_arch to store the new
      guest-accessible SPRs on POWER8, adds code to the get/set_one_reg
      functions to allow userspace to access this state, and adds code to
      the guest entry and exit to context-switch these SPRs between host
      and guest.
      
      Note that DPDES (Directed Privileged Doorbell Exception State) is
      shared between threads on a core; hence we store it in struct
      kvmppc_vcore and have the master thread save and restore it.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
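      A minimal standalone model (illustrative names, not the kernel
      structures themselves) of where this state lives, per the note above:
      per-thread SPRs in the per-vcpu struct, the core-shared DPDES in the
      vcore, saved only by the master thread.

          #include <stdint.h>

          /* Illustrative model: per-vcpu POWER8 SPRs vs. shared DPDES. */
          struct vcore_model {
                  uint64_t dpdes;           /* shared by all threads on the core */
          };

          struct vcpu_arch_model {
                  uint64_t iamr, fscr, tar; /* examples of per-thread SPRs */
                  struct vcore_model *vcore;
                  int is_master;
          };

          /* On guest exit, every thread saves its own SPRs, but only the
           * master thread saves the core-wide DPDES. */
          static void save_guest_state(struct vcpu_arch_model *v,
                                       uint64_t hw_iamr, uint64_t hw_dpdes)
          {
                  v->iamr = hw_iamr;
                  if (v->is_master)
                          v->vcore->dpdes = hw_dpdes;
          }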
    • KVM: PPC: Book3S HV: Align physical and virtual CPU thread numbers · e0b7ec05
      Committed by Paul Mackerras
      On a threaded processor such as POWER7, we group VCPUs into virtual
      cores and arrange that the VCPUs in a virtual core run on the same
      physical core.  Currently we don't enforce any correspondence between
      virtual thread numbers within a virtual core and physical thread
      numbers.  Physical threads are allocated starting at 0 on a first-come
      first-served basis to runnable virtual threads (VCPUs).
      
      POWER8 implements a new "msgsndp" instruction which guest kernels can
      use to interrupt other threads in the same core or sub-core.  Since
      the instruction takes the destination physical thread ID as a parameter,
      it becomes necessary to align the physical thread IDs with the virtual
      thread IDs, that is, to make sure virtual thread N within a virtual
      core always runs on physical thread N.
      
      This means that it's possible that thread 0, which is where we call
      __kvmppc_vcore_entry, may end up running some other vcpu than the
      one whose task called kvmppc_run_core(), or it may end up running
      no vcpu at all, if for example thread 0 of the virtual core is
      currently executing in userspace.  However, we do need thread 0
      to be responsible for switching the MMU -- a previous version of
      this patch that had other threads switching the MMU was found to
      be responsible for occasional memory corruption and machine check
      interrupts in the guest on POWER7 machines.
      
      To accommodate this, we no longer pass the vcpu pointer to
      __kvmppc_vcore_entry, but instead let the assembly code load it from
      the PACA.  Since the assembly code will need to know the kvm pointer
      and the thread ID for threads which don't have a vcpu, we move the
      thread ID into the PACA and we add a kvm pointer to the virtual core
      structure.
      
      In the case where thread 0 has no vcpu to run, it still calls into
      kvmppc_hv_entry in order to do the MMU switch, and then naps until
      either its vcpu is ready to run in the guest, or some other thread
      needs to exit the guest.  In the latter case, thread 0 jumps to the
      code that switches the MMU back to the host.  This control flow means
      that now we switch the MMU before loading any guest vcpu state.
      Similarly, on guest exit we now save all the guest vcpu state before
      switching the MMU back to the host.  This has required substantial
      code movement, making the diff rather large.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
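      The alignment rule itself is simple; as a sketch (function name
      hypothetical), with virtual thread N pinned to physical thread N, a
      guest msgsndp aimed at physical thread N reaches the vcpu that
      considers itself thread N:

          /* Sketch of the invariant described above: within a (sub)core,
           * a vcpu's physical thread index equals its virtual one. */
          static int physical_thread_for(int vcpu_id, int threads_per_subcore)
          {
                  return vcpu_id % threads_per_subcore;
          }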
  2. 09 January 2014 (3 commits)
  3. 09 November 2013 (1 commit)
  4. 06 November 2013 (1 commit)
  5. 05 November 2013 (1 commit)
  6. 31 October 2013 (5 commits)
  7. 30 October 2013 (7 commits)
  8. 29 October 2013 (4 commits)
  9. 24 October 2013 (3 commits)
    • of/irq: simplify args to irq_create_of_mapping · e6d30ab1
      Committed by Grant Likely
      All the callers of irq_create_of_mapping() pass the contents of a struct
      of_phandle_args structure to the function. Since all the callers already
      have an of_phandle_args pointer, why not pass it directly to
      irq_create_of_mapping()?
      Signed-off-by: Grant Likely <grant.likely@linaro.org>
      Acked-by: Michal Simek <monstr@monstr.eu>
      Acked-by: Tony Lindgren <tony@atomide.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
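      As an illustration of the API change (signatures sketched from kernel
      headers of that era, so treat them as approximate):

          /* Before: callers unpacked the of_phandle_args fields by hand. */
          unsigned int irq_create_of_mapping(struct device_node *controller,
                                             const u32 *intspec,
                                             unsigned int intsize);

          /* After: callers already hold an of_phandle_args, so they pass
           * the whole structure. */
          unsigned int irq_create_of_mapping(struct of_phandle_args *irq_data);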
    • of/irq: Replace of_irq with of_phandle_args · 530210c7
      Committed by Grant Likely
      struct of_irq and struct of_phandle_args are exactly the same structure.
      This patch makes the kernel use of_phandle_args everywhere. This in
      itself isn't a big deal, but it makes some follow-on patches simpler.
      Signed-off-by: Grant Likely <grant.likely@linaro.org>
      Acked-by: Michal Simek <monstr@monstr.eu>
      Acked-by: Tony Lindgren <tony@atomide.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
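      For reference, a hedged sketch of the surviving type (comments added
      here; struct of_irq carried the same three pieces of information under
      different field names, which is why the replacement could be
      mechanical):

          struct of_phandle_args {
                  struct device_node *np;          /* node the phandle points at */
                  int args_count;                  /* number of cells in args[] */
                  uint32_t args[MAX_PHANDLE_ARGS]; /* the raw specifier cells */
          };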
    • of/irq: Rename of_irq_map_* functions to of_irq_parse_* · 0c02c800
      Committed by Grant Likely
      The OF irq handling code has been overloading the term 'map' to refer to
      both parsing the data in the device tree and mapping it to the internal
      Linux irq system.  This is probably because the device tree does have
      the concept of an 'interrupt-map' property for translating interrupt
      references from one node to another, but 'map' is still confusing when
      the primary purpose of some of the functions is to parse the DT data.
      
      This patch renames all the of_irq_map_* functions to of_irq_parse_*
      which makes it clear that there is a difference between the parsing
      phase and the mapping phase. Kernel code can make use of just the
      parsing or just the mapping support as needed by the subsystem.
      
      The patch was generated mechanically with a handful of sed commands.
      Signed-off-by: Grant Likely <grant.likely@linaro.org>
      Acked-by: Michal Simek <monstr@monstr.eu>
      Acked-by: Tony Lindgren <tony@atomide.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
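      Combined with the two changes above, typical usage now separates the
      phases cleanly; a hedged sketch:

          struct of_phandle_args irq_data;
          unsigned int virq = 0;

          /* Phase 1: parse the DT data only; no Linux irq number yet. */
          if (!of_irq_parse_one(np, 0, &irq_data))
                  /* Phase 2: map the parsed specifier into the Linux
                   * irq space. */
                  virq = irq_create_of_mapping(&irq_data);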
  10. 23 October 2013 (1 commit)
    • powerpc: Don't corrupt user registers on 32-bit · 955c1cab
      Committed by Paul Mackerras
      Commit de79f7b9 ("powerpc: Put FP/VSX and VR state into structures")
      modified load_up_fpu() and load_up_altivec() in such a way that they
      now use r7 and r8.  Unfortunately, the callers of these functions on
      32-bit machines then return to userspace via fast_exception_return,
      which doesn't restore all of the volatile GPRs, but only r1, r3-r6
      and r9-r12.  This was causing userspace segfaults and other
      userspace misbehaviour on 32-bit machines.
      
      This fixes the problem by changing the register usage of load_up_fpu()
      and load_up_altivec() to avoid using r7 and r8 and instead use r6 and
      r10.  This also adds comments to those functions saying which registers
      may be used.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Scott Wood <scottwood@freescale.com> (on e500mc, so no altivec)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  11. 19 October 2013 (5 commits)
  12. 17 October 2013 (6 commits)